RuntimeError: One of the differentiated Tensors does not require grad

I want a BatchNorm with no bias (no shift), so I write it like this:

self.bn = BatchNorm1d(2048)
self.bn.bias.requires_grad_(False)  # no shift
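
For context, a minimal self-contained version of this setup could look like the sketch below (the surrounding module and the linear layer's input size are just illustrative assumptions, not my actual model):

from torch import nn

class Net(nn.Module):  # hypothetical wrapper module, for illustration only
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 2048)      # assumed input feature size
        self.bn = nn.BatchNorm1d(2048)
        self.bn.bias.requires_grad_(False)  # no shift: the bias stays frozen at zero

    def forward(self, x):
        return self.bn(self.fc(x))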

What I am doing is meta-learning, so I need to calculate gradients with respect to the model parameters. I write

grad_info = torch.autograd.grad(loss_meta_train, self.model.module.params(), create_graph=True)

to get gradients I can backpropagate through later, but I get this error:

RuntimeError: One of the differentiated Tensors does not require grad

The tensor in question is the BatchNorm bias. How can I solve this?

What about using allow_unused=True?

grad_info = torch.autograd.grad(loss_meta_train, self.model.module.params(), create_graph=True, allow_unused=True)

This question is quite similar to "RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior".
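
Note that allow_unused=True only covers tensors that require grad but do not appear in the graph of the loss; a tensor whose requires_grad is False, like the frozen BatchNorm bias here, still triggers the "does not require grad" error. A common workaround is to pass only the parameters that require grad and keep allow_unused=True for any that happen to be unused. A minimal sketch, assuming self.model.module.params() returns an iterable of parameter tensors as in your code:

grads_needed = [p for p in self.model.module.params() if p.requires_grad]
grad_info = torch.autograd.grad(
    loss_meta_train,
    grads_needed,
    create_graph=True,
    allow_unused=True,
)
# allow_unused=True returns None for parameters that did not contribute to the
# loss; replace those with zeros if later code expects real tensors
grad_info = [
    g if g is not None else torch.zeros_like(p)
    for g, p in zip(grad_info, grads_needed)
]

If later code zips these gradients back onto the parameter list, it has to use the same filtered list (grads_needed here), otherwise the ordering will drift once the frozen bias is skipped.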