Hello, I am working on a generative model, and part of the loss requires the output of torch.max(torch.nn.functional.conv2d(predict_img, filter)) with several different filters. These filters are supposed to keep their fixed, predefined weights throughout training.
I set them to requires_grad=False, and now I get this error: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn.
Is there any way to make it work without embedding this computation as some fixed layer of the model?
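For context, a minimal sketch of the setup described above (the shapes and filter values are placeholders, since the original post does not give them):

```python
import torch
import torch.nn.functional as F

# Placeholder for the generator output; in the real model this would
# come out of the network rather than torch.rand.
predict_img = torch.rand(1, 1, 8, 8)

# A fixed, hand-chosen filter whose weights must not be trained.
filt = torch.tensor([[[[1.0, -1.0], [1.0, -1.0]]]])
filt.requires_grad_(False)

# The loss term from the post: max over the convolution response.
loss = torch.max(F.conv2d(predict_img, filt))

# Calling loss.backward() here raises:
#   RuntimeError: element 0 of tensors does not require grad and does
#   not have a grad_fn
# because neither predict_img nor filt requires grad, so no backward
# graph was built for this computation.
```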
Are there other parameters in your model that you would like to train? If none of your inputs requires grad, it doesn't make sense to run backward, because the backward graph was never created in the first place.
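To illustrate: keeping the filter frozen with requires_grad=False is fine, as long as the tensor you actually want gradients for requires grad. A sketch with placeholder shapes:

```python
import torch
import torch.nn.functional as F

# Stand-in for the generator output; in a real model it would carry
# requires_grad=True automatically because it depends on trainable weights.
predict_img = torch.rand(1, 1, 8, 8, requires_grad=True)

# Fixed filter: leaf tensor with requires_grad left at False.
filt = torch.tensor([[[[1.0, -1.0], [1.0, -1.0]]]])

loss = torch.max(F.conv2d(predict_img, filt))
loss.backward()  # works: grad flows into predict_img, filt stays untouched

print(predict_img.grad is not None)  # True
print(filt.grad)                     # None -- the filter weights never update
```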
Thank you for the sample! I went back through the code line by line and found that a different intermediate operation didn't support backpropagation; it's fixed now.