How to resize parameters

Hello,

I am doing some feature selection, and at the end of an epoch I realise that some features are irrelevant and I don’t want to process them any more. The natural fix is to remove them from the dataset and shrink the weight matrix accordingly. But it seems impossible to update the shape of a Parameter in place: the forward pass still works, but backward breaks.

See the example code:

import torch
from torch.autograd import Variable

x = Variable(torch.rand(10))
W = Variable(torch.rand(10, 10), requires_grad=True)

(W.mul(x)).sum().backward()  # works
W.grad.data.zero_()

W.data = W[:, 1:6].data  # feature selection: keep only columns 1..5
W.mul(x[1:6]).sum().backward()  # breaks: W.grad still has the old (10, 10) shape

Do you have any idea how to deal with this properly? I really don’t mind losing the history on the weights, since I only do this at the end of an epoch.
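The only workaround I can think of is to rebuild the Variable from the sliced data at the epoch boundary, so that a fresh gradient buffer gets allocated. A minimal sketch of that idea (it discards the old .grad and the history, which is fine in my case):

import torch
from torch.autograd import Variable

x = Variable(torch.rand(10))
W = Variable(torch.rand(10, 10), requires_grad=True)
(W.mul(x)).sum().backward()

# Instead of mutating W.data in place, build a fresh leaf Variable
# from the sliced data; the stale (10, 10) .grad buffer is discarded.
W = Variable(W.data[:, 1:6].clone(), requires_grad=True)

W.mul(x[1:6]).sum().backward()  # works: a fresh (10, 5) grad is allocated

Is this the intended way, or is there something cleaner?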

Thank you for your help

I also realised that there is no way to reuse the optimizer and continue training after the resize without patching its internal state by hand (and code that relies on that internal state may break in future releases).
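For now the best I have found is to simply recreate the optimizer after the resize, which throws away any accumulated state such as momentum buffers. A minimal sketch, assuming plain SGD:

import torch
from torch.autograd import Variable

W = Variable(torch.rand(10, 10), requires_grad=True)
optimizer = torch.optim.SGD([W], lr=0.01, momentum=0.9)

# ... train for an epoch, then decide to drop columns ...
W = Variable(W.data[:, 1:6].clone(), requires_grad=True)

# The old optimizer still points at the discarded Variable, and its
# momentum buffers have the old shape, so build a new one from scratch.
optimizer = torch.optim.SGD([W], lr=0.01, momentum=0.9)

Is there a supported way to keep (part of) the optimizer state across the resize?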