How to project a weight matrix onto a constraint set?

I want to implement a projected gradient descent algorithm in PyTorch. I have a weight matrix C defined as

C = Variable(torch.ones(BATCH_SIZE[0], BATCH_SIZE[1]).cuda(), requires_grad=True)

During training I do the following:

sum_loss.backward()
optimizer.step()
C = models.project(C)

Here, sum_loss is the total loss I want to minimize. After the optimization step optimizer.step(), I use my custom project() to project C onto the constraint set. However, I get the following error:

RuntimeError: a leaf Variable that requires grad has been used in an in-place operation.

Is there any way to get around this problem so that I can implement projected gradient descent successfully?

For the projection step itself, you want to operate on the associated Tensor, C.data, rather than on the Variable C: modifying a leaf Variable that requires grad in place (or rebinding it to the result of an operation) is what triggers the error. (There will likely be something new in PyTorch 0.4, as Variable and Tensor will be merged.) I would also recommend taking a peek at the optimizers' source code for inspiration.
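To make this concrete, here is a minimal sketch of one projected-gradient-descent step along those lines. It assumes a toy quadratic loss, a small shape, and a stand-in box projection onto [0, 1] (the real models.project and BATCH_SIZE from the question are not shown in the thread); .cuda() is omitted so it runs on CPU:

```python
import torch
from torch.autograd import Variable

# Hypothetical projection onto the box [0, 1] -- a stand-in for models.project.
def project(t):
    return t.clamp(0.0, 1.0)

BATCH_SIZE = (4, 4)  # assumed shape for illustration
C = Variable(torch.ones(BATCH_SIZE[0], BATCH_SIZE[1]), requires_grad=True)
optimizer = torch.optim.SGD([C], lr=0.1)

optimizer.zero_grad()
sum_loss = ((C - 2.0) ** 2).sum()  # toy loss for illustration
sum_loss.backward()
optimizer.step()

# Project the underlying tensor, outside of autograd's view, instead of
# rebinding C = project(C): this keeps C a leaf Variable that the
# optimizer still holds a reference to.
C.data = project(C.data)
```

Rebinding C = models.project(C), as in the question, has the additional problem that the new C is a non-leaf Variable the optimizer no longer tracks; assigning to C.data sidesteps both issues. (In later PyTorch versions the same effect can be had with `with torch.no_grad(): C.clamp_(0.0, 1.0)`.)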

Best regards

Thomas