cuDNN issue when weight.requires_grad=False

Hi,
I wrote the code below to implement a custom conv2d layer.

If I set weight.requires_grad=True, there is no issue. If I set weight.requires_grad=False, the new layer still works in the forward pass, but loss.backward() fails with the cuDNN error below. Why does the error only appear when weight.requires_grad=False?

File "train.py", line 229, in train
    loss.backward()
  File "/root/anaconda3/lib/python3.8/site-packages/torch/_tensor.py", line 255, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/root/anaconda3/lib/python3.8/site-packages/torch/autograd/__init__.py", line 147, in backward
    Variable._execution_engine.run_backward(
RuntimeError: Unable to find a valid cuDNN algorithm to run convolution

Here is the forward method of my custom layer:

    def forward(self, *inputs):
        # With four inputs, an extra tensor is passed alongside x, weight, and bias.
        if len(inputs) == 4:
            x, weight, bias, add_tensor = inputs
        elif self.bias_term:
            x, weight, bias = inputs
        else:
            x, weight = inputs
            bias = None
        x = F.conv2d(
            x,
            weight,
            bias,
            self.strides,
            self.padding,
            self.dilation,
            self.groups,
        )
        return x
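
For reference, the layer is used roughly like this. This is a simplified, self-contained sketch, not my exact code: the wrapper class FrozenConv2d, the shapes, and the hyperparameters are placeholders.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FrozenConv2d(nn.Module):
        # Placeholder wrapper standing in for my actual custom layer.
        def __init__(self, in_ch, out_ch, k):
            super().__init__()
            self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k))
            self.bias = nn.Parameter(torch.zeros(out_ch))
            self.weight.requires_grad = False  # the setting that triggers the error
            self.strides = 1
            self.padding = 0
            self.dilation = 1
            self.groups = 1

        def forward(self, x):
            return F.conv2d(x, self.weight, self.bias, self.strides,
                            self.padding, self.dilation, self.groups)

    layer = FrozenConv2d(3, 8, 3).cuda()
    x = torch.randn(2, 3, 32, 32, device="cuda", requires_grad=True)
    loss = layer(x).mean()
    loss.backward()  # this call raised the cuDNN RuntimeError in my setup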

Could you post an executable code snippet to reproduce the issue as well as the output of python -m torch.utils.collect_env, please?

@ptrblck,
Thank you! The issue was observed, but it then disappeared on its own. Thank you anyway!