Loss.backward() does not work after changing layer size

I am trying to double the size of a layer. The resizing works and everything is fine for the forward pass, but the backward pass raises an error. If I create the model from scratch and resize the layers, it works fine. I think the problem is the autograd graph created by the first run. Is there any workaround you might suggest?

The error happens here: https://github.com/erogol/Net2Net/blob/master/examples/train_mnist.py#L155

Error:
Traceback (most recent call last):
  File "train_mnist.py", line 155, in <module>
    train(epoch)
  File "train_mnist.py", line 116, in train
    loss.backward()
  File "/home/egolge/miniconda2/envs/py3k/lib/python3.6/site-packages/torch/autograd/variable.py", line 157, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
  File "/home/egolge/miniconda2/envs/py3k/lib/python3.6/site-packages/torch/autograd/__init__.py", line 98, in backward
    variables, grad_variables, retain_graph)
RuntimeError: The expanded size of the tensor (320) must match the existing size (480) at non-singleton dimension 1. at /home/egolge/libs/pytorch/torch/lib/THC/generic/THCTensor.c:323

If anyone runs into the same problem, here is my workaround: create a new model with copy.deepcopy() of the existing model, and then make the changes you like on the copy. It works, but I don't think it is the proper solution.
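For reference, a minimal sketch of that workaround, under assumptions I am making about the setup (the model, layer names, and sizes below are hypothetical; the real code uses the Net2Net wider operation and the MNIST model from the linked example). The key point is that training continues on the deep copy with a freshly created optimizer, so no state tied to the old graph is reused.

```python
import copy
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

# Hypothetical stand-in for the MNIST model in the linked example.
class Net(nn.Module):
    def __init__(self, hidden=240):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(784, hidden)
        self.fc2 = nn.Linear(hidden, 10)

    def forward(self, x):
        return self.fc2(F.relu(self.fc1(x)))

model = Net()
# ... train model for a while ...

# Workaround: deep-copy the trained model and resize the copy instead of
# mutating the model that the existing autograd/optimizer state refers to.
wider_model = copy.deepcopy(model)
wider_model.fc1 = nn.Linear(784, 480)   # e.g. double the hidden size
wider_model.fc2 = nn.Linear(480, 10)
# (here the old weights would be transferred, e.g. with the Net2Net wider op)

# Re-create the optimizer over the new parameters before training further.
optimizer = optim.SGD(wider_model.parameters(), lr=0.01, momentum=0.5)
```

Re-creating the optimizer matters because optimizers like SGD with momentum keep per-parameter buffers sized to the old tensors, which would otherwise mismatch the resized layers.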
