PyTorch RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at /opt/conda/conda

I have been trying to teach myself PyTorch using some Kaggle data from the seedlings competition. My code was working but taking a long time to run, so I decided to try to freeze all layers except the last, and added the code below:

All code: https://github.com/christopher-ell/Deep_Learning_Begin/blob/master/fastai_Lecture%202%20-%20Seedlings%20(Pytorch%20only).ipynb

Code that started the problem:

## Freeze all but the last layer
for param in resnet34.parameters():
    ## Each tensor has the flag requires_grad; setting it to False freezes
    ## the parameter associated with it
    param.requires_grad = False

## Parameters of newly constructed modules have requires_grad=True by default
## Create a new module that will become the final layer
num_ftrs = resnet34.fc.in_features
print(num_ftrs)
## Give the final layer a linear transform with twelve outputs, one for each category
resnet34.fc = nn.Linear(num_ftrs, 12)
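
As a sanity check, something like the below (just a rough sketch; the optimizer settings are placeholder values) should confirm that only the new fc layer is still trainable:

import torch.optim as optim

## Count trainable vs. total parameters; after freezing, only the new fc layer should be trainable
trainable = sum(p.numel() for p in resnet34.parameters() if p.requires_grad)
total = sum(p.numel() for p in resnet34.parameters())
print("Trainable parameters:", trainable, "of", total)

## Build the optimizer over only the parameters that still require gradients
optimizer = optim.SGD(filter(lambda p: p.requires_grad, resnet34.parameters()),
                      lr=0.01, momentum=0.9)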

Any other pointers for my code would also be appreciated.

Thanks

It seems your target is not in the range [0, n_classes - 1]. Based on your code you should have 12 classes.

Thanks for your response ptrblck.

Where else do I need to tell it there are 12 classes, as I am already saying it has 12 outputs in:

resnet34.fc = nn.Linear(num_ftrs, 12)

Your target values should be in the range [0, 11].
So you could check in each iteration if your target has invalid values.
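
Something like this inside your training loop would catch bad values early (just a sketch; train_loader stands in for your own loader):

## Sketch: verify every batch of targets stays inside [0, 11]
for data, target in train_loader:
    if target.min() < 0 or target.max() > 11:
        print("Invalid target values found:", target)
        break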

Thanks for your help ptrblck!

I’ve been a bit busy (and frustrated), and I am still having issues. I was looking at the link below:

That person’s problem was that the last layer wasn’t outputting the right number of categories, but mine already outputs the correct number: resnet34.fc = nn.Linear(num_ftrs, 12).

The line it says is not working is loss = criteria(output, target), and printing the output and target gives 12 x batch_size for output and batch_size x 1 for target.

Thanks

Could you check the values inside your target? They are probably out of the valid range.
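
For reference, nn.CrossEntropyLoss expects the output as [batch_size, n_classes] and the target as [batch_size] long values in [0, n_classes - 1]; here is a minimal standalone example with made-up sizes:

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
output = torch.randn(16, 12)           ## [batch_size, n_classes]
target = torch.randint(0, 12, (16,))   ## [batch_size], long values in [0, 11]
loss = criterion(output, target)
print(loss)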

Thanks. The target values of the first batch before it stops are:

Target
11
1
1
9
10
9
7
5
2
2
6
3
6
6
7
12

Is it because it has a 12 in it and PyTorch assumes 0-11, while the target output is 1-12?
If there is a problem with my target values, why did it work before I put in the code to freeze all layers?

Thanks

Yes, it’s because of the 12. If your mapping is in [1, 12], just subtract 1 and it should work.
That’s strange, since it shouldn’t work.
Could you try to remove the freezing part and check it again?
It would be interesting to know if the error was somehow ignored.
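
A sketch of the subtraction inside the training loop (model, criterion, and train_loader stand in for your own objects):

for data, target in train_loader:
    target = target - 1              ## shift the labels from [1, 12] down to [0, 11]
    output = model(data)
    loss = criterion(output, target)

Alternatively, if the labels really come out of the dataset as 1-12, you could pass target_transform=lambda y: y - 1 when creating the ImageFolder.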

Sorry ptrblck, I’m still having some problems understanding. The data loaded into the tensor that’s returning the numbers 1-12 is image data taken from 12 folders of images of particular seedlings. So when I tried to subtract 1 from train_raw, I got an error saying you can’t subtract an int from an image.

So the program is actually creating the 1-12 numbers itself; how can I get it to create 0-11 instead?

Below is the code I used to load the data:

## Image loaders
## The dataset transforms put the images into tensor form
normalise = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
train_raw = dataset.ImageFolder(PATH+"train", transform=transforms.Compose([transforms.RandomResizedCrop(sz),
                                                                             transforms.RandomHorizontalFlip(),
                                                                             transforms.ToTensor(),
                                                                             normalise]))
train_loader = DataLoader(train_raw, batch_size=batch_size, shuffle=True, num_workers=4)

valid_raw = dataset.ImageFolder(PATH+"valid", transform=transforms.Compose([transforms.CenterCrop(sz),
                                                                            transforms.ToTensor(),
                                                                            normalise]))
valid_loader = DataLoader(valid_raw, batch_size=batch_size, shuffle=False, num_workers=4)

Your approach should work.
Could you check if you have any additional folders (maybe hidden) in your root folder?
I created fake folders like:

root
    - folder0
        - image0.jpg
    - folder1
        - image0.jpg
    - folder2
        - image0.jpg

These are the classes:

dataset = datasets.ImageFolder(root='root')

print(dataset.classes)
> ['folder0', 'folder1', 'folder2']
print(dataset.class_to_idx)
> {'folder2': 2, 'folder1': 1, 'folder0': 0}

Your targets shouldn’t be images, but long integers. Are you sure you are subtracting from the right value?

It’s running!!

The folder list was below:

['.ipynb_checkpoints', 'Black-grass', 'Charlock', 'Cleavers', 'Common Chickweed', 'Common wheat', 'Fat Hen', 'Loose Silky-bent', 'Maize', 'Scentless Mayweed', 'Shepherds Purse', 'Small-flowered Cranesbill', 'Sugar beet']
{'.ipynb_checkpoints': 0, 'Black-grass': 1, 'Charlock': 2, 'Cleavers': 3, 'Common Chickweed': 4, 'Common wheat': 5, 'Fat Hen': 6, 'Loose Silky-bent': 7, 'Maize': 8, 'Scentless Mayweed': 9, 'Shepherds Purse': 10, 'Small-flowered Cranesbill': 11, 'Sugar beet': 12}

This is on Google Cloud, and there seems to be a folder called '.ipynb_checkpoints' which maps to 0. I got it running by changing the number of outputs in the final layer to 13:

resnet34.fc = nn.Linear(num_ftrs, 13)

I’m not sure what to do with the invisible folder besides changing the number of outputs in the final layer. Is there any other way I can deal with that? Will having an unnecessary output create problems?

Thanks so much.

I would move the valid data into its own subfolder. As far as I know, .ipynb_checkpoints will be created in the current working directory where your notebook was created, so it can end up alongside your class folders.
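
As a quick workaround you could also delete the checkpoint folder before building the datasets (just a sketch; PATH is the data root from your loader code):

import os
import shutil

## Remove the hidden Jupyter checkpoint folder so ImageFolder only sees the 12 class folders
ckpt = os.path.join(PATH, "train", ".ipynb_checkpoints")
if os.path.isdir(ckpt):
    shutil.rmtree(ckpt)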