"One of the variables needed for gradient computation has been modified by an inplace operation" error

I have run into the error "one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [2048]] is at version 2; expected version 1 instead." Thanks for all the solutions already posted for this error, but none of them seem to cover my situation. I tried torch.autograd.set_detect_anomaly(True), and it reported "Error detected in native batch norm backward. Traceback of forward call that caused the error:". The traceback does point to a torch.nn.BatchNorm1d layer, but I wonder how I can control the in-place behavior of this layer the way nn.ReLU(inplace=True) allows, since BatchNorm1d does not seem to accept an inplace argument. Thank you for your time and patience!
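
For reference, here is how I enabled anomaly detection, together with a standalone sketch that reproduces the same class of error. This is not my actual training code (the module, shapes, and optimizer here are placeholders I made up for illustration); it just shows one pattern that trips the same message: an in-place update to a BatchNorm parameter between two backward passes over the same graph.

```python
import torch
import torch.nn as nn

# Make autograd report the forward op that created the tensor
# the failing backward pass needed.
torch.autograd.set_detect_anomaly(True)

bn = nn.BatchNorm1d(2048)  # 2048 features, matching the [2048] in the error
opt = torch.optim.SGD(bn.parameters(), lr=0.1)

x = torch.randn(4, 2048)
out = bn(x)                # bn.weight is saved here for the backward pass
loss1 = out.sum()
loss2 = (out * out).sum()

loss1.backward(retain_graph=True)
opt.step()        # in-place update of bn.weight bumps its version counter
loss2.backward()  # RuntimeError: one of the variables needed for gradient
                  # computation has been modified by an inplace operation:
                  # [torch.FloatTensor [2048]] ...
```

My real code is larger, but the anomaly trace points at the BatchNorm1d forward call in the same way as in this sketch.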