Gradient cannot be computed, help!

I have the following code. It gives me the following error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

lam = 0  # lambda
c = 0    # penalty
mu = 0   # other parameters

# outer loop
while outer_step < self.max_outer_iter:
    inner_step = 0
    # inner loop
    while inner_step < self.max_inner_iter:
        epochs += 1
        inner_step += 1
        trainning_data ...

        # compute the loss
        loss = self.entire_l(trainning_data, lam, c, mu)

        # Zero Gradient Container
        optimizer.zero_grad()
        # Optimize step
        loss.backward(retain_graph=True)
        optimizer.step()

        # Decay Learning Rate
        scheduler.step()

    # update the slack variable
    lam = lam + c * something
    outer_step += 1

I am new to PyTorch, and I do not know what's wrong with this code.

Can you please paste the full error message?

C:\Users\lli4\Anaconda3\lib\site-packages\torch\autograd\__init__.py:130: UserWarning: Error detected in MmBackward. Traceback of forward call that caused the error:
  File "D:/workspace/POMDP_CODE/ubvo/ubvo.py", line 302, in <module>
    agent.train(product_pomdp)
  File "D:/workspace/POMDP_CODE/ubvo/ubvo.py", line 139, in train
    e_v = self.e_vio(trajs, self.mu)
  File "D:/workspace/POMDP_CODE/ubvo/ubvo.py", line 24, in e_vio
    f[i] = self.traj_v(trajs[i], mu)
  File "D:/workspace/POMDP_CODE/ubvo/ubvo.py", line 32, in traj_v
    h_g[i] = self.step_v(y, ny, mu)
  File "D:/workspace/POMDP_CODE/ubvo/ubvo.py", line 41, in step_v
    g = mu * torch.log(torch.sum(f)) - self.value.getValue(y, mu)
  File "D:/workspace/POMDP_CODE/ubvo/ubvo.py", line 265, in getValue
    temp[i] = torch.exp(self.getQValue(y, i) / mu)
  File "D:/workspace/POMDP_CODE/ubvo/ubvo.py", line 259, in getQValue
    value = self.model(input)
  File "C:\Users\lli4\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "D:\workspace\POMDP_CODE\models\nn_model.py", line 15, in forward
    x = self.l3(x)
  File "C:\Users\lli4\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\lli4\Anaconda3\lib\site-packages\torch\nn\modules\linear.py", line 93, in forward
    return F.linear(input, self.weight, self.bias)
  File "C:\Users\lli4\Anaconda3\lib\site-packages\torch\nn\functional.py", line 1692, in linear
    output = input.matmul(weight.t())
 (Triggered internally at  ..\torch\csrc\autograd\python_anomaly_mode.cpp:104.)
  Variable._execution_engine.run_backward(
Traceback (most recent call last):
  File "D:/workspace/POMDP_CODE/ubvo/ubvo.py", line 302, in <module>
    agent.train(product_pomdp)
  File "D:/workspace/POMDP_CODE/ubvo/ubvo.py", line 132, in train
    loss.backward(retain_graph=True)
  File "C:\Users\lli4\Anaconda3\lib\site-packages\torch\tensor.py", line 221, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "C:\Users\lli4\Anaconda3\lib\site-packages\torch\autograd\__init__.py", line 130, in backward
    Variable._execution_engine.run_backward(
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [128, 1]], which is output 0 of TBackward, is at version 4; expected version 3 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

It is because you are changing parameters somewhere before calling optimizer.step(). The code you posted isn't detailed enough to pin it down, but the problem may be arising from some step inside self.entire_l().

It would be helpful if you provided the entire_l method.
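
For reference, here is a minimal sketch (just an illustration, not your code) of the kind of pattern that produces this exact error: a tensor that autograd saved during the forward pass gets modified in place before backward() runs.

import torch

a = torch.randn(3, requires_grad=True)
b = torch.randn(3, requires_grad=True)

loss = (a * b).sum()   # MulBackward0 saves a and b for the backward pass

with torch.no_grad():
    b += 1.0           # in-place update bumps b's version counter

loss.backward()        # RuntimeError: one of the variables needed for gradient
                       # computation has been modified by an inplace operation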

Is it because I change the value of lam in the outer loop?

Yes, that's certainly a possibility!

In that case, is there a way to change that value in the network dynamically for each outer loop? I am new to PyTorch.

I suggest using a standard training loop like:

y_hat = model(y)
loss = loss_function(y_hat, y)
loss.backward()
with torch.no_grad():
    optimizer.step()
    optimizer.zero_grad()

with the model doing all of the forward computation and loss_function computing the loss using only y_hat and y.

I appreciate your fast reply. I tried it, but there are several issues:

  1. The weights are not being updated.
  2. It still does not seem to solve my error:

 C:\Users\lli4\Anaconda3\lib\site-packages\torch\autograd\__init__.py:130: UserWarning: Error detected in MulBackward0. Traceback of forward call that caused the error:
  File "D:/workspace/POMDP_CODE/ubvo/ubvo.py", line 300, in <module>
    agent.train(product_pomdp)
  File "D:/workspace/POMDP_CODE/ubvo/ubvo.py", line 156, in train
    self.value.lam = self.value.lam + self.value.c * e_v
 (Triggered internally at  ..\torch\csrc\autograd\python_anomaly_mode.cpp:104.)
  Variable._execution_engine.run_backward(
Traceback (most recent call last):
  File "D:/workspace/POMDP_CODE/ubvo/ubvo.py", line 300, in <module>
    agent.train(product_pomdp)
  File "D:/workspace/POMDP_CODE/ubvo/ubvo.py", line 125, in train
    loss.backward()
  File "C:\Users\lli4\Anaconda3\lib\site-packages\torch\tensor.py", line 221, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "C:\Users\lli4\Anaconda3\lib\site-packages\torch\autograd\__init__.py", line 130, in backward
    Variable._execution_engine.run_backward(
RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling backward the first time.
Here is my train method:

def train(self, env):
    torch.autograd.set_detect_anomaly(True)
    self.env = env

    # set hyperparameters
    self.num_traj = paras.num_traj  # the number of trajectories
    self.max_time = paras.max_time
    self.max_inner_iter = paras.max_inner_iter
    self.max_outer_iter = paras.max_outer_iter
    self.beta = paras.beta
    self.eta = paras.eta

    # Create the network
    input_size = 66  # 64 inputs for b, 1 input for q, 1 input for a
    hidden_size = 128
    output_size = 1
    net = Network(input_size, hidden_size, output_size)

    # create a value obj
    self.value = PytorchValue(net, env)
    self.value.mu = paras.mu  # temperature
    self.value.lam = paras.lam
    self.value.c = paras.c

    # print the parameter's shape
    for name, param in self.value.model.named_parameters():
        print(name, '\t\t', param.shape)

    OPTIMIZER_CONSTRUCTOR = torch.optim.SGD  # This is the SGD algorithm.
    ### TensorBoard Writer Setup ###
    log_name = str(self.eta) + str(OPTIMIZER_CONSTRUCTOR.__name__)
    writer = SummaryWriter(log_dir="../logs/" + log_name)
    print("To see tensorboard, run: tensorboard --logdir=logs/")

    # add model into the tensorboard
    x = torch.randn(1, input_size)
    writer.add_graph(net, x)

    # Create the optimizer
    optimizer = OPTIMIZER_CONSTRUCTOR(self.value.model.parameters(), lr=self.eta)
    # Create the learning rate scheduler
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=paras.max_inner_iter, gamma=0.9)

    outer_step = 0
    epochs = 0

    e_v = np.Inf

    while outer_step < self.max_outer_iter:
        o_e_v = e_v  # set the old expectation as the last round of the expectation
        inner_step = 0
        print("-------------------------------------------------------------------------------------------")
        print("Outer iteration", '\t\t', outer_step, '\t\t', 'C', '\t\t', self.value.c, '\t\t', 'lambda', '\t\t',
              self.value.lam)

        while inner_step < self.max_inner_iter:
            epochs += 1
            inner_step += 1
            # sample some trajectories
            trajs = list()
            for i in range(self.num_traj):
                y = self.env.sample_belief()
                trajs.append(self.env.sample_trajectory(y, self.value, self.max_time))

            # compute the loss
            loss = self.entire_l(trajs)
            loss.backward()

            # check if the weights are updated
            a = list(self.value.model.parameters())[0].clone()
            with torch.no_grad():
                optimizer.zero_grad()
                optimizer.step()
            b = list(self.value.model.parameters())[0].clone()
            print("Weights Updated: ", not torch.equal(a.data, b.data))

            # compute the expectation of h of g
            # TODO: there is a problem here, I change the tensor here.
            e_v = self.e_vio(trajs)

            print('Inner iteration', '\t\t', inner_step, '\t\t', 'loss:', loss.data, '\t\t', 'e_v:', e_v)

            writer.add_scalar('Expected violation', e_v, global_step=epochs)
            writer.add_scalar('L', loss, global_step=epochs)
            writer.add_scalar('Value of' + str((self.env.b0, 0)), self.value.getValue((self.env.b0, 0)),
                              global_step=epochs)

            # Decay Learning Rate
            scheduler.step()

        # if the expectation is not decreased, then increase the penalty term
        if abs(e_v) > 0.9 * abs(o_e_v):
            self.value.c = self.beta * self.value.c  # c is increasing to infinity
        else:
            self.value.c = self.value.c

        # update the slack variable
        self.value.lam = self.value.lam + self.value.c * e_v
        outer_step += 1

    print("Finish the training!")
    writer.close()
    pass

Can you please provide the entire_l method of your class?

Also, please check whether you are calling loss.backward() more than once,
and remove in-place operations like += or add_() on the model parameters or outputs!
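
For example, instead of pre-allocating a tensor and filling it with indexed assignment (an in-place write), you can collect the terms in a Python list and stack them. A minimal sketch of the out-of-place pattern (generic example, not your code):

import torch

xs = [torch.randn(4, requires_grad=True) for _ in range(3)]

# In-place pattern that can trip autograd's version check once the tensor is reused:
#     out = torch.zeros(3)
#     for i, x in enumerate(xs):
#         out[i] = x.sum()

# Out-of-place alternative: build a list and stack it.
out = torch.stack([x.sum() for x in xs])
loss = out.mean()
loss.backward()   # gradients flow back to every x in xs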

    def e_vio(self, trajs):
        f = torch.zeros(len(trajs))
        for i in range(len(trajs)):
            f[i] = self.traj_v(trajs[i])
        return torch.mean(f)

    def traj_v(self, traj):
        h_g = torch.zeros(len(traj) - 1)
        for i in range(len(traj) - 1):
            y = traj[i][-1]
            ny = traj[i + 1][-1]
            h_g[i] = self.step_v(y, ny)
        return torch.sum(h_g)

    def step_v(self, y, ny):
        f = torch.zeros(len(self.env.action_set))
        for i in range(len(self.env.action_set)):
            a = list(self.env.action_set.keys())[i]
            f[i] = torch.exp((self.env.product_belief_reward(y, a) + self.value.getValue(ny)) / self.value.mu)

        g = self.value.mu * torch.log(torch.sum(f)) - self.value.getValue(y)
        return torch.max(g, torch.Tensor([0])) ** 2

    def traj_l(self, traj):
        l = torch.zeros(len(traj))
        for i in range(len(traj) - 1):  # for each state in the simulated history
            y = traj[i][-1]
            ny = traj[i + 1][-1]
            step_v = self.step_v(y, ny)
            l[i] = self.value.getValue(y) + self.value.lam * step_v + self.value.c / 2 * torch.abs(step_v) ** 2
        return torch.sum(l)

    def entire_l(self, trajs):
        l = torch.zeros(len(trajs))
        for i in range(len(trajs)):
            l[i] = self.traj_l(trajs[i])  # for the ith history
        return torch.sum(l)

This is all the code.

Hi, I think I have found the problem. After I call backward, I use the network's forward pass to get some other values,

which may cause the weights to change (I am not sure). Basically, my steps are the following:

  1. backward, which updates the weights
  2. forward, which may change the weights (not sure)

Is there a way to avoid this?
Thanks.

I think this is the reason.

Indeed, you found the problem!

There is a similar problem that one encounters when implementing meta-learning algorithms, since they too require in-place operations on leaf variables (nn.Parameters specifically), because some intermediate parameter updates are needed as part of the meta-learning algorithm. I was stuck on a similar problem a few weeks ago. The idea is basically to clone your leaf tensors using .clone() and use these new cloned variables for your computation, so that gradients can still flow back to the desired leaf variables.

Why do we want to do it using .clone()?

Because leaf tensors don't keep any history of the operations applied to them, and a leaf tensor that requires grad does not allow in-place operations. If you want to understand this a little more, read my response to my own question. You can also take cues from the inner_loop(self, task) function in this code, or the inner_loop(self, task) function of class SineMAML() in this code. You can even take a look at a slightly more involved method which uses hooks in this code (refer to class MetaLearner(object)).
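
As a rough, self-contained sketch of that idea (illustrative names like fast_weight, not from your code): compute an intermediate parameter update on clones of the leaf parameters and keep using the clones, so gradients still flow back to the original leaves.

import torch
import torch.nn as nn
import torch.nn.functional as F

layer = nn.Linear(4, 1)
x = torch.randn(8, 4)

inner_loss = layer(x).pow(2).mean()
grads = torch.autograd.grad(inner_loss, layer.parameters(), create_graph=True)

# The "updated" parameters are new tensors; the leaf parameters are untouched.
fast_weight = layer.weight.clone() - 0.1 * grads[0]
fast_bias = layer.bias.clone() - 0.1 * grads[1]

# Use the cloned, updated parameters for the next forward pass.
outer_loss = F.linear(x, fast_weight, fast_bias).pow(2).mean()
outer_loss.backward()   # gradients reach layer.weight / layer.bias via the clones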

P.S.: Why is retain_graph=True? You aren't computing higher-order derivatives, right? It certainly doesn't seem to be the case from the code snippet you have shared.

P.P.S.: I may have given a slightly complex and convoluted answer, but I thought the added context might help you get a better idea not only of the problem you asked about, but also of a problem that could arise if you ever wanted to use higher-order derivatives (because I saw retain_graph=True in your code snippet). Please do ask for clarifications if my answer is not clear.
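
For completeness, a tiny illustration (separate from your code) of what retain_graph=True is actually for, namely reusing the same graph for more than one backward pass:

import torch

x = torch.randn(3, requires_grad=True)
y = (x ** 2).sum()

y.backward(retain_graph=True)   # keep the graph alive for another backward pass
y.backward()                    # works only because the graph was retained above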
