RuntimeError: element 0 of variables tuple is volatile


(Josue Ortega) #1

I am trying to run a custom model on UCF-101, and I am having a problem while running my code:
I can run model.forward(input) and compute the loss, but the moment I call loss.backward() it gives me this error:

    Traceback (most recent call last):
      File "/home/josueortc/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 156, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
      File "/home/josueortc/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 98, in backward
        variables, grad_variables, retain_graph)
    RuntimeError: element 0 of variables tuple is volatile

The code is:

    # compute output for the number of timesteps selected by train loader
    optimizer.zero_grad()
    output = model.forward(x=input_var)

    # Calculate the loss function based on the criterion. For example, UCF-101 is CrossEntropy
    loss = criterion(output, target_var)

    # measure accuracy and record loss
    prec1, prec5 = accuracy(output.data, target, topk=(1, 5))
    losses.update(loss.data[0], input.size(0))
    top1.update(prec1[0], input.size(0))
    top5.update(prec5[0], input.size(0))

    loss.backward()

    optimizer.step()

As you can see, I am already calling optimizer.zero_grad() before computing the loss, so I am not sure why it is saying that element 0 of the variables tuple is volatile.
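
(Note for readers on PyTorch 0.4 or later: the volatile flag was removed in favor of torch.no_grad(), and the analogous failure now reads "element 0 of tensors does not require grad". A minimal sketch with stand-in layer sizes:)

```python
import torch

# Stand-in model and input; sizes are arbitrary.
model = torch.nn.Linear(3, 2)
x = torch.randn(1, 3)

# Running the forward pass under no_grad() (the modern replacement for a
# volatile input) produces an output that is detached from the graph...
with torch.no_grad():
    out = model(x)
print(out.requires_grad)  # False

# ...so backward() fails, just as it did for a volatile Variable.
try:
    out.sum().backward()
except RuntimeError as e:
    print(e)  # element 0 of tensors does not require grad ...
```
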


(Yun Chen) #2

Try the snippet below to find whether the problem comes from the input, the model, or the criterion.

    optimizer.zero_grad()
    print(input_var.volatile)
    output = model.forward(x=input_var)
    print(output.volatile)
    loss = criterion(output, target_var)
    print(loss.volatile)
    loss.backward()
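
(On PyTorch 0.4+, where the volatile attribute no longer exists, the equivalent check is requires_grad and grad_fn; a sketch with stand-in sizes:)

```python
import torch

# Stand-in model, input, and criterion for the same three checks.
model = torch.nn.Linear(3, 2)
input_var = torch.randn(1, 3)

output = model(input_var)
loss = output.sum()

print(input_var.requires_grad)   # False: a plain input leaf
print(output.requires_grad)      # True: depends on model parameters
print(loss.grad_fn is not None)  # True: loss is part of the graph
```
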

(Josue Ortega) #3

I traced it back to the output. Within the model I have this, which is a recurrent unit:

    for i in range(timesteps):
        indices = torch.LongTensor([i])
        #ids = torch.LongTensor([1]).cuda()
        pyramidal1, self.state = self.unit(torch.squeeze(x[:,i,...], 1), self.state)
        print(pyramidal1.volatile)
        print("Timesteps: ", i)

The interesting thing is that it becomes volatile at the second timestep, so is there something wrong with the way I am handling the data?


(Yun Chen) #4

Is something wrong with self.unit?
What is self.state? It’s dangerous to carry a variable/parameter through a for-loop if you don’t handle it right.


(Josue Ortega) #5

I found the error. It was in the __init__ of self.unit: I was initializing self.state with a volatile temporary variable. Thank you!
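
(For reference, on current PyTorch the same recurrent loop works as long as the state starts out as an ordinary tensor rather than one created for inference only. A minimal sketch with hypothetical sizes:)

```python
import torch
import torch.nn as nn

# Hypothetical recurrent unit and data: 4 input features, hidden size 8,
# batch of 2 sequences with 3 timesteps each.
cell = nn.RNNCell(4, 8)
x = torch.randn(2, 3, 4)   # (batch, timesteps, features)

# Initialize the state as a plain tensor (not volatile / no_grad), so
# every timestep stays inside the autograd graph.
state = torch.zeros(2, 8)
for t in range(3):
    state = cell(x[:, t, :], state)

# backward() succeeds and gradients reach the cell's parameters.
state.sum().backward()
print(all(p.grad is not None for p in cell.parameters()))  # True
```
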