How to fix "list index out of range" error with updateGradInput?

Currently I am getting this error when trying to run my function forwards and backwards to evaluate loss and gradient.

The full error message is:

Running optimization with L-BFGS
Traceback (most recent call last):
  File "test2.py", line 309, in <module>
    x, losses = optim.lbfgs(feval, img, optim_state)
  File "/usr/local/lib/python2.7/dist-packages/torch/legacy/optim/lbfgs.py", line 66, in lbfgs
    f, g = opfunc(x)
  File "test2.py", line 264, in feval
    grad = model.updateGradInput(x, dy)
  File "/usr/local/lib/python2.7/dist-packages/torch/legacy/nn/Sequential.py", line 47, in updateGradInput
    self.gradInput = self.modules[0].updateGradInput(input, currentGradOutput)
IndexError: list index out of range

Specifically, the issue is with:

  grad = model.updateGradInput(x, dy)

This is the function used to evaluate the loss and gradient:

num_calls = [0]

def feval(x):
  num_calls[0] += 1
  model.updateOutput(x)
  grad = model.updateGradInput(x, dy)
  return loss, grad.view(grad.nelement())

This is what I use to run the function:

if params.optimizer == 'lbfgs':
  optim_state = {
    "maxIter": params.num_iterations,
    "verbose": True,
    "tolX":-1,
    "tolFun":-1,
  }

x, losses = optim.lbfgs(feval, img, optim_state)

The y and dy variables come from:

y = model.updateOutput(img)
dy = img.clone().zero_()

I don’t understand why the updateOutput call works while updateGradInput fails.

This is the line in Sequential.py that the error message references:

self.gradInput = self.modules[0].updateGradInput(input, currentGradOutput)

The only list on that line is self.modules.

The error suggests that this list is empty. If so, your legacy Sequential has no registered submodules, and updateOutput acts as the identity mapping.
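To see why that combination of symptoms points to an empty modules list, here is a minimal stand-in for a legacy-style container (a hypothetical sketch, not the real torch.legacy.nn.Sequential source):

```python
# Minimal stand-in for a legacy-style Sequential container
# (hypothetical sketch; the real torch.legacy.nn.Sequential is more involved).
class MiniSequential(object):
    def __init__(self):
        self.modules = []  # stays empty if nothing was ever registered

    def updateOutput(self, input):
        # With no submodules the loop body never runs, so the
        # "forward pass" silently returns the input unchanged.
        output = input
        for module in self.modules:
            output = module.updateOutput(output)
        return output

    def updateGradInput(self, input, gradOutput):
        # Indexing modules[0] on an empty list raises IndexError,
        # matching the traceback in the question.
        return self.modules[0].updateGradInput(input, gradOutput)

model = MiniSequential()
print(model.updateOutput([1, 2, 3]))  # works: acts as the identity

try:
    model.updateGradInput([1, 2, 3], [0, 0, 0])
except IndexError as e:
    print("IndexError:", e)  # same failure mode as the question
```

So a forward pass that "works" is not evidence that the model was built correctly; it is exactly what an empty container would do.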

I guess the issue is that the “model setup” step is not working properly?

This is the model setup:


#content_layers_default = ['conv_4']
#style_layers_default = ['conv_1', 'conv_2', 'conv_3', 'conv_4', 'conv_5']

content_layers_default = ['relu_4']
style_layers_default = ['relu_1', 'relu_2', 'relu_3', 'relu_4', 'relu_5']



def create_model(cnn, style_image_caffe, content_image_caffe,
                 style_weight=params.style_weight,
                 content_weight=params.content_weight,
                 content_layers=content_layers_default,
                 style_layers=style_layers_default):

    cnn = copy.deepcopy(cnn)
    content_losses = []
    style_losses = []

    model = nn.Sequential()  # the new Sequential module network
    #gram = GramMatrix()  # we need a gram module in order to compute style targets

    # move these modules to the GPU if possible:
    if use_cuda:
        model = model.cuda()
        #gram = gram.cuda()

    i = 1
    for layer in list(cnn):

        if isinstance(layer, nn.ReLU):
            name = "relu_" + str(i)
            model.add_module(name, layer)

            if name in content_layers:
                # add content loss:
                target = model(content_image_caffe).clone()
                content_loss = ContentLoss(target, content_weight)
                model.add_module("content_loss_" + str(i), content_loss)
                content_losses.append(content_loss)

            if name in style_layers:
                # add style loss:
                target_feature = model(style_image_caffe).clone()
                target_feature_gram = gram(target_feature).cuda()
                style_loss = StyleLoss(target_feature_gram, style_weight)
                model.add_module("style_loss_" + str(i), style_loss)
                style_losses.append(style_loss)

            i += 1
      
    return model, style_losses, content_losses

I have also tried using a VGG model that was converted to PyTorch via: https://github.com/jcjohnson/pytorch-vgg/issues/3

My guess is that model.add_module(...) is not working as you expect.

Can I suggest a refactor?
First make a list of submodules…

modules = []
for layer in list(cnn):
    ...
    # instead of model.add_module(name, layer)
    modules.append(layer)

then create the model with the list of submodules

model = nn.Sequential(modules)

@jpeg729 I just tried that solution, and I got this error:

TypeError: __init__() takes exactly 1 argument (2 given)
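That TypeError is consistent with a constructor that accepts no module arguments at all. A minimal sketch of the mismatch (hypothetical classes, not the real torch source):

```python
# Hypothetical sketch of the constructor mismatch; not the real torch source.
class LegacySequential(object):
    def __init__(self):            # takes only self, like torch.legacy.nn.Sequential
        self.modules = []

class ModernSequential(object):
    def __init__(self, *modules):  # torch.nn.Sequential-style: accepts modules
        self.modules = list(modules)

ModernSequential("conv", "relu")   # fine

try:
    LegacySequential(["conv", "relu"])  # passing an argument it cannot accept
except TypeError as e:
    # On Python 2 this reads "__init__() takes exactly 1 argument (2 given)"
    print("TypeError:", e)
```

Note that even modern torch.nn.Sequential would need the list unpacked: nn.Sequential(*modules).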

I don’t think that my two loss modules are being set up properly.

When I add print statements inside the loop, nothing shows up in the terminal:

i = 1
for layer in list(cnn):

    if isinstance(layer, nn.ReLU):
        print("ReLU")
        name = "relu_" + str(i)
        model.add_module(name, layer)

        if name in content_layers:
            # add content loss:
            print("Content Loss")
            target = model(content_image_caffe).clone()
            content_loss = ContentLoss(target, content_weight)
            model.add_module("content_loss_" + str(i), content_loss)
            content_losses.append(content_loss)

        if name in style_layers:
            # add style loss:
            print("Content Loss")
            target_feature = model(style_image_caffe).clone()
            target_feature_gram = gram(target_feature).cuda()
            style_loss = StyleLoss(target_feature_gram, style_weight)
            model.add_module("style_loss_" + str(i), style_loss)
            style_losses.append(style_loss)

        i += 1
  
return model, style_losses, content_losses

Using print(layer) is giving me an output like this:


ReLU(inplace)

Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))

ReLU(inplace)

torch.legacy.nn doesn’t support nn.Conv2d, so I had to use ReLU layers instead.

I don’t understand.
Which line gives that error? A stack trace would be helpful.
At first glance that code looks identical to your original code.

@jpeg729 This is the full script: https://gist.github.com/ProGamerGov/f735c1360207b420c4f920d69853e157

And this is the full error message:

ubuntu@ip-Address:~/test-project$ python test2.py -content_image examples/inputs/tubingen_512.jpg -style_image examples/inputs/seated-nude_512.jpg
/usr/local/lib/python2.7/dist-packages/torchvision/transforms/transforms.py:156: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
  "please use transforms.Resize instead.")
Model Loaded
Capturing content targets
Capturing style target
Running optimization with L-BFGS
Traceback (most recent call last):
  File "test2.py", line 309, in <module>
    x, losses = optim.lbfgs(feval, img, optim_state)
  File "/usr/local/lib/python2.7/dist-packages/torch/legacy/optim/lbfgs.py", line 66, in lbfgs
    f, g = opfunc(x)
  File "test2.py", line 264, in feval
    grad = model.updateGradInput(x, dy)
  File "/usr/local/lib/python2.7/dist-packages/torch/legacy/nn/Sequential.py", line 47, in updateGradInput
    self.gradInput = self.modules[0].updateGradInput(input, currentGradOutput)
IndexError: list index out of range
ubuntu@ip-Address:~/test-project$

The TypeError: __init__() takes exactly 1 argument (2 given) error came from model = nn.Sequential(modules), when I tried your solution.

But I don’t think the ReLU layers are being detected: https://gist.github.com/ProGamerGov/f735c1360207b420c4f920d69853e157#file-test2-py-L105

For whatever reason, it looks like if isinstance(layer, nn.ReLU): never detects a ReLU layer.

That is because you aren’t using torch.nn.Sequential, but rather torch.legacy.nn.Sequential, which I am not used to.

However, torch.legacy.nn.Sequential has no method named add_module, so if your model-creation loop for layer in list(cnn): were actually reaching those calls, I would expect it to raise an AttributeError each time it tried to add a module.

torch.legacy.nn.Sequential instead has a method named add, which takes a single module: model.add(layer)
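As an illustration of the API difference, here is a stand-in sketch (not the real legacy class): children are registered through add, and calling a nonexistent add_module raises AttributeError.

```python
# Stand-in sketch of the legacy add() API; not the real torch.legacy.nn code.
class LegacySequential(object):
    def __init__(self):
        self.modules = []

    def add(self, module):          # legacy-style registration: one argument
        self.modules.append(module)
        return self

model = LegacySequential()
model.add("relu_1")                 # the legacy spelling: model.add(layer)
print(len(model.modules))           # 1

# There is no add_module on this class, so this would fail:
try:
    model.add_module("relu_2", "layer")
except AttributeError as e:
    print("AttributeError:", e)
```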

So neither model-setup function has been working because of torch.legacy.nn, but they never raised any errors because, for some reason, this if statement never matches:

if isinstance(layer, nn.ReLU):

Yes. It would seem so.