Leaf variable has been moved into the graph interior - I think I know the reason, but how can I work around it?

Hi, I got the above error. After some reading about it, I think I know why; correct me if I'm wrong:

       self.decoder.weight[:10, :] = torch.nn.Parameter(self.common_emb_weights, requires_grad=False)

The problem is that I still need this functionality: I have a weight tensor of a given size, and I want just a part of it to hold fixed values that won't be changed during training.
I was reading that this kind of in-place "messing around" with a leaf parameter may cause this error.

What can I do to get this functionality without the error?

Thanks!!

Is there any savior in the crowd??

Would it be possible to reset that particular part of your weight matrix after each iteration?
If so, you could wrap your code in a with torch.no_grad() block and run it after optimizer.step() was performed.
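As a rough sketch of that pattern (the layer shape, the fixed_rows values, and the dummy input here are just placeholders standing in for your decoder and common_emb_weights):

import torch
import torch.nn as nn

# placeholder layer and fixed values - adjust to your own decoder / common_emb_weights
decoder = nn.Linear(20, 30)
fixed_rows = torch.zeros(10, 20)       # values the first 10 rows should always keep
optimizer = torch.optim.SGD(decoder.parameters(), lr=1e-3)

out = decoder(torch.randn(4, 20)).sum()
out.backward()
optimizer.step()

with torch.no_grad():                  # the in-place write is not tracked by autograd
    decoder.weight[:10] = fixed_rows   # reset the rows that should stay constant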

What do you mean by "reset after each iteration"?
I have a fully connected layer at the end of the network, and I want those weights to have certain values; I still need to insert these values somehow…

Sorry for not being clear enough.
I meant you could wrap your code sample in a with torch.no_grad(): block and just run it after each iteration in your training loop. This would make sure the weights have the desired values in the next iteration.

Maybe I'm not getting it right, but no matter what, I will need to initialize the weights somehow, and as I understood from similar issues this is what causes the above problem, isn't it?

Yes, this will throw the error if you don't wrap the code in the aforementioned guard.
Would this code work for you?

import torch
import torch.nn as nn


class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.fc = nn.Linear(10, 10)

    def forward(self, x):
        x = self.fc(x)
        return x


model = MyModel()
x = torch.randn(1, 10)
target = torch.randint(0, 10, (1,))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

for epoch in range(10):
    optimizer.zero_grad()
    output = model(x)
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()
    print('Epoch {}, loss {}'.format(epoch, loss.item()))

    with torch.no_grad():
        # Manipulate the weight matrix after the optimizer step;
        # the in-place assignment is not tracked by autograd here
        model.fc.weight[:5] = torch.ones(5, 10)
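Since the reset happens inside torch.no_grad() and after optimizer.step(), the in-place assignment is not recorded by autograd (so the weight stays a leaf variable), and the first five rows are restored to the desired values before the next forward pass.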

I will check, thanks.