Changing the number of parameters in an optimizer during training

Hello everyone

I am trying to optimize images (not a model), so I have an optimizer instantiated like this:

optimizer_img = torch.optim.SGD([image], lr=args.lr_img, momentum=0.5)

where image is a 100x3x32x32 tensor.
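For reference, a minimal self-contained version of this setup (the learning rate value below is a placeholder, since args.lr_img is not shown):

import torch

# The tensor being optimized must be a leaf tensor with requires_grad=True.
image = torch.randn(100, 3, 32, 32, requires_grad=True)
optimizer_img = torch.optim.SGD([image], lr=0.1, momentum=0.5)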

For the next iteration, I want to use an image tensor of shape 50x3x32x32.

At loss.backward() I am getting "RuntimeError: Function SliceBackward0 returned an invalid gradient at index 0 - got [50, 3, 32, 32] but expected shape compatible with [100, 3, 32, 32]".

I have tried reinitializing the optimizer (which I would rather not do).

I also tried updating the parameter in place like this: optimizer_img.param_groups[0]['params'][0].data = new_image_tensor

I also tried the partial-function solution from https://github.com/pytorch/pytorch/issues/97603.

The error persists in all cases. Is there any possible solution?

Thank you.

Why don't you want to reinitialize the optimizer, given that you are optimizing entirely new samples in each iteration?

Actually, the samples are not entirely new: they are the samples optimized in the previous iteration, which we then process so that the number of samples becomes half of what it was.

In any case, I am still getting the same error even when I reinitialize the optimizer in each iteration.
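For what it's worth, the SliceBackward0 in the traceback suggests the halved tensor is still attached, through a slicing op, to the graph of the old 100-sample tensor, which is how gradients of both shapes end up meeting during backward. One way around this is to detach and clone the processed result so it becomes a fresh leaf tensor, and only then build the new optimizer around it. A minimal sketch, where process is a hypothetical stand-in for the halving step and the loss is a placeholder:

import torch

def process(x):
    # Hypothetical stand-in for the halving step: keep the first half of the batch.
    return x[: x.shape[0] // 2]

image = torch.randn(100, 3, 32, 32, requires_grad=True)
optimizer_img = torch.optim.SGD([image], lr=0.1, momentum=0.5)

# One optimization step on the 100-sample tensor (placeholder loss).
loss = image.sum()
loss.backward()
optimizer_img.step()
optimizer_img.zero_grad()

# Halve the batch outside the autograd graph, then make the result a fresh leaf.
with torch.no_grad():
    halved = process(image)
image = halved.detach().clone().requires_grad_(True)  # new leaf, shape [50, 3, 32, 32]

# The old optimizer state (momentum buffers) belongs to the old tensor,
# so a new optimizer is created around the new leaf.
optimizer_img = torch.optim.SGD([image], lr=0.1, momentum=0.5)

loss = image.sum()
loss.backward()  # no shape mismatch: this graph only ever sees [50, 3, 32, 32]
optimizer_img.step()

Note that rebuilding the optimizer resets the momentum buffers; if you need to carry momentum across the halving, you would have to transform the buffer stored in optimizer_img.state to match the new shape, which is considerably more fragile.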