Replace diagonal elements with vector

I have been searching everywhere for something equivalent to the following in PyTorch, but I cannot find anything.

L_1 = np.tril(np.random.normal(scale=1., size=(D, D)), k=0)

L_1[np.diag_indices_from(L_1)] = np.exp(np.diagonal(L_1))

I guess there is no way to replace the diagonal elements in such an elegant way using PyTorch.

# A has size k * k
k = A.size(0)
A.as_strided([k], [k + 1]).copy_(vector)
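
For example, to mirror the NumPy snippet from the question, here is a minimal sketch (assuming a PyTorch version where as_strided is available; D is illustrative):

import torch

D = 5
L_1 = torch.randn(D, D).tril()        # lower-triangular matrix
diag = L_1.as_strided([D], [D + 1])   # view onto the diagonal of L_1
diag.exp_()                           # exponentiate the diagonal in place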

Thanks @SimonW. I guess it works in a similar fashion in the case of Variables, i.e. my_variable.data…

Please operate directly on the Variable rather than var.data if you want to track history (and call backward, etc.).

@SimonW I'm getting the error "AttributeError: 'Variable' object has no attribute 'as_strided'" although I have the latest PyTorch version. What is the problem?

Oh, I see… Yeah, it might not be available in 0.3.1. Before the next release, you can try advanced indexing, A[[1,2,3], [1,2,3]] = 4, but that might break backward if some part of the graph depends on the original overwritten values. Or you can multiply by a matrix and then add another matrix… Yeah, these are not ideal…
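
The multiply-then-add variant can look like this, a rough sketch for 0.3.1-style Variables (names are illustrative):

import torch
from torch.autograd import Variable

size = 10
A = Variable(torch.rand(size, size), requires_grad=True)
new_diag = Variable(torch.rand(size), requires_grad=True)

mask = Variable(torch.eye(size))              # 1 on the diagonal, 0 elsewhere
out = A * (1 - mask) + torch.diag(new_diag)   # zero the old diagonal, add the new one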

So, is there any efficient way to do the replacement in PyTorch?

Hi,

The solution with advanced indexing is the way to go for 0.3.1, I think.
Keep in mind that inplace operations are not always possible when working with Variables because the original value might be needed to compute the backward pass.

The thing is that my torch version is '0.3.1.post2' and I do not seem to have the above-mentioned functions.

You should be able to do: A[[1,2,3], [1,2,3]]=4 or A[range(size), range(size)] = your_diag_value.

Will it work if the "diagonal_value" is a vector though?

Yes it will work if your_diag_value is a 1D tensor.

OK, thank you. This will not affect the backward pass, right?

It will affect it because the diagonal values of the original matrix are not used to compute the output anymore.
See the code sample below:

import torch
from torch.autograd import Variable

size = 10
full = Variable(torch.rand(size, size), requires_grad=True)
new_diag = Variable(torch.rand(size), requires_grad=True)

# Do this because we cannot change a leaf variable inplace
full_clone = full.clone()

full_clone[range(size), range(size)] = new_diag

full_clone.sum().backward()

# Should be 0 on the diagonal and 1 everywhere else
print(full.grad)
# Should be full of 1
print(new_diag.grad)

So, there is no way to change the diagonal values and keep the backward pass unaffected? Seems strange.

Why do you want to do that? The gradients that you compute will be completely wrong then…

I want to force the diagonal of my covariance matrix to be positive while keeping it differentiable in the backward pass.

Ok, not sure that makes sense…
But here is the code to do it :slight_smile:

import torch
from torch.autograd import Variable

size = 10
full = Variable(torch.rand(size, size), requires_grad=True)
new_diag = Variable(torch.rand(size), requires_grad=True)

# Do this because we cannot change a leaf variable inplace
full_clone = full.clone()

# WARNING: using data here will break the graph and this
# operation will not be tracked by the autograd engine.
# Hence giving "wrong" gradients
full_clone.data[range(size), range(size)] = new_diag.data

full_clone.sum().backward()

# Should be full of 1
print(full.grad)
# Should be None (equivalent to full of 0)
print(new_diag.grad)

I was doing something like this:

    # Lower-triangular factor (upper triangle zeroed in place)
    self.L_1 = Parameter(torch.randn(dim, dim), requires_grad=True)
    self.L_1.data = torch.tril(self.L_1.data)
    # Diagonal of L_1, exponentiated in place so it is positive
    self.log_diag = Parameter(torch.diag(self.L_1.data), requires_grad=True)
    self.log_diag.data = torch.exp(self.log_diag.data)
    # 0/1 mask selecting the diagonal entries
    self.mask = Parameter(torch.diag(torch.ones_like(self.log_diag.data)))
    # Splice the positive diagonal into L_1
    self.L = Parameter(self.mask.data * torch.diag(self.log_diag.data) + (1. - self.mask.data) * self.L_1.data, requires_grad=True).cuda()

If the backward doesn't need the content of that cov matrix you have, then just modifying it inplace is fine. (Run .backward to find out.) Otherwise, you can do a clone and then modify inplace.
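
Putting the pieces together, here is a small sketch of a positive and differentiable diagonal using the clone + advanced-indexing approach discussed above (names are illustrative):

import torch
from torch.autograd import Variable

size = 10
full = Variable(torch.rand(size, size), requires_grad=True)
log_diag = Variable(torch.rand(size), requires_grad=True)

# Clone first because a leaf Variable cannot be modified inplace
cov = full.clone()
# exp() keeps the diagonal positive and the assignment stays in the graph
cov[range(size), range(size)] = torch.exp(log_diag)

cov.sum().backward()
print(full.grad)      # 0 on the diagonal, 1 everywhere else
print(log_diag.grad)  # exp(log_diag), by the chain rule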