Hello everyone! I’m trying to train a convolutional network and I want the convolutional kernels of the last
Conv2d layer to have unit norm.
The layer is stored in the variable `self.end`. I have added this method to the model class (note that this layer has only one output channel, so the weight tensor has shape `[1, in_channels, kernel_height, kernel_width]`):
```python
def make_convolutions_unit_norm(self):
    with torch.no_grad():
        W = self.end.weight.data.clone()
        num_channels = W.shape[1]
        for i in range(num_channels):
            W[0, i] = W[0, i] / W[0, i].norm()
        self.end.weight.data = W
    return
```
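For completeness, here is a self-contained toy version of what I'm doing. The model below is a made-up stand-in for my real network (the 3-input-channel, 5x5 `Conv2d` is hypothetical); it just shows that, called in isolation, the method does leave each per-input-channel kernel with unit norm:

```python
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Hypothetical stand-in for the real last layer:
        # 1 output channel, 3 input channels, 5x5 kernels.
        self.end = nn.Conv2d(3, 1, kernel_size=5)

    def make_convolutions_unit_norm(self):
        with torch.no_grad():
            W = self.end.weight.data.clone()
            num_channels = W.shape[1]
            for i in range(num_channels):
                # Normalize each per-input-channel kernel to unit norm.
                W[0, i] = W[0, i] / W[0, i].norm()
            self.end.weight.data = W

model = TinyModel()
model.make_convolutions_unit_norm()
# One norm per input channel; each should now be ~1.
norms = model.end.weight.data[0].flatten(1).norm(dim=1)
print(norms)
```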
The hope is to call it after the backpropagation step, without messing up autograd. However, for some reason this doesn't work: I get the following error during backpropagation after calling `make_convolutions_unit_norm()`:
```
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 1, 2048, 96]], which is output 0 of CudnnConvolutionBackward, is at version 4; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
```
Any idea about how I can achieve this?
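Edit: from what I understand, the error appears if the weights are modified between the forward and backward passes, since cuDNN saves tensors for backward and checks their version counters. Here is a minimal sketch (toy model and shapes, all hypothetical) where renormalizing only after `optimizer.step()` seems to avoid the error:

```python
import torch
import torch.nn as nn

# Toy stand-in for the real model: 3 input channels, 1 output channel.
model = nn.Sequential(nn.Conv2d(3, 1, kernel_size=3))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(2, 3, 8, 8)
out = model(x)
loss = out.mean()
loss.backward()  # backward runs on the still-unmodified weights
opt.step()

# Only now, with no pending backward, renormalize in place,
# per input channel as in the method from the question.
with torch.no_grad():
    w = model[0].weight
    w[0] /= w[0].flatten(1).norm(dim=1).view(-1, 1, 1)
```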