How to freeze variables outside of the model

Hello, I am working on a generative model, and part of the loss requires the output of torch.max(torch.nn.functional.conv2d(perdict_img, filter)) with different filters. These filters are supposed to keep their predefined weights during training.

I set them with requires_grad=False, and then I get this error: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn.
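Roughly, the relevant part looks like this (just a sketch, not my actual code; shapes and filter values are placeholders):

import torch
import torch.nn.functional as F

# stand-in for the model output (in the real code this comes from the generator)
perdict_img = torch.rand(1, 1, 8, 8, requires_grad=True)
# fixed filter that should keep its predefined weights
filt = torch.tensor([[[[0., 1.], [1., 0.]]]])  # requires_grad is False by default
loss_term = torch.max(F.conv2d(perdict_img, filt))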

Is there any way to make it work without embedding this computation as some fixed layer of the model?

Are there other parameters in your model that you would like to train? If none of your inputs require grad, it wouldn’t make sense to run backward, because the backward graph wasn’t created in the first place.
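For example (a minimal sketch, unrelated to your model), if nothing in the graph requires grad, calling backward raises exactly that error:

import torch

a = torch.rand(3, 3)   # requires_grad defaults to False
out = (a * 2).sum()
out.requires_grad      # False: no backward graph was built
out.backward()         # RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn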

So perdict_img is the output of the model, and it is computed from the parameters I would like to train.

Setting the filter to not require grad in a convolution is fine (see below):

import torch

# input that requires grad (stands in for your model output)
a = torch.rand(10, 5, 5, 5, requires_grad=True)
# fixed filter that should not be trained
b = torch.rand(5, 5, 2, 2, requires_grad=False)

out = torch.max(torch.nn.functional.conv2d(a, b))
out.requires_grad  # True
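And gradients still flow back into the trainable input while the frozen filter stays untouched (continuing the snippet above, just to illustrate):

out.backward()
a.grad is not None  # True: the input receives a gradient
b.grad is None      # True: the fixed filter gets none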

We’d probably need some more context here. Do you have a small snippet we can run that reproduces the issue?

Thank you for the sample! I went back and checked line by line, and found that a different intermediate operation did not support backpropagation; it’s fixed now.
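For anyone who hits the same error: a typical culprit looks something like this (just an illustration, not the actual operation from my code):

import torch

a = torch.rand(10, 5, 5, 5, requires_grad=True)
idx = torch.argmax(a)     # argmax is not differentiable, so the result has no grad_fn
loss = idx.float().sum()
loss.requires_grad        # False -> loss.backward() raises the same RuntimeError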
