PyTorch layer with no trainable parameters

I am just playing around a bit with PyTorch and have a model with the following structure:

Layer A - 100 trainable parameters
Layer B - 0 trainable parameters
Layer C - 5 trainable parameters

In my forward function, I have something like:

def forward(self, x, y):
    a = self.layer_a(x)
    b = self.layer_b(a)
    loss = self.layer_c(b, y)
    return {"loss": loss}

layer_b itself is simply defined as:

class LayerB(nn.Module):
    def __init__(self, params):
        super().__init__()
        self.params = params

    def forward(self, x):
        return x.clone()

Now when I run this model, it takes one step of the training loop and then crashes with:

one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [32, 1]], which is output 0 of AsStridedBackward0, is at version 2; expected version 1 instead

I am trying to understand why this is happening. Do you think it is necessary to call x.clone().detach(), given that there are no trainable parameters in that layer? I ask because the training at least does not crash when I do that.

If you do x.clone().detach(), layer_a will never be updated by backpropagation, because .detach() completely breaks the computation graph at that point.
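For example, here is a minimal, self-contained sketch (with made-up layer shapes, not your actual model) of how detaching the intermediate tensor stops gradients from ever reaching layer_a:

import torch
import torch.nn as nn

layer_a = nn.Linear(4, 4)   # stand-in for your Layer A
layer_c = nn.Linear(4, 1)   # stand-in for your Layer C

x = torch.randn(8, 4)
y = torch.randn(8, 1)

a = layer_a(x)
b = a.clone().detach()                        # gradient flow stops here
loss = nn.functional.mse_loss(layer_c(b), y)
loss.backward()

print(layer_a.weight.grad)                    # None: layer_a never receives a gradient
print(layer_c.weight.grad)                    # a tensor: layer_c still trains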
Using torch.nn.Identity for layer_b might be more convenient, but is that really where the error comes from?

  • If so, try this:

    class LayerB(nn.Module):
        def __init__(self, params):
            ...
            self.net = nn.Identity()

        def forward(self, x):
            return self.net(x)
  • Otherwise, the error may come from a wrong in-place assignment somewhere else, for example x += ... with x requiring the gradient (a small reproduction sketch follows below).
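To make that second point concrete, here is a small sketch (unrelated to your model) that reproduces the same class of error: sigmoid saves its output for the backward pass, so modifying that output in place bumps its version counter and autograd complains when backward() runs.

import torch

x = torch.randn(32, 1, requires_grad=True)
y = torch.sigmoid(x)    # sigmoid's backward needs its own output y
y += 1                  # in-place op: y is now at version 1 instead of 0
y.sum().backward()      # RuntimeError: one of the variables needed for
                        # gradient computation has been modified by an
                        # inplace operation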

…

Thanks for the reply:

So this layer is basically just:

def forward(self, x):
    return x.clone()

So this is why I am puzzled, as there is no such in-place update in it, but I will keep digging.