Trying to make a customized MNIST dataset restricted to specific classes

I implemented My_MINST to grab the data of the corresponding classes.
It takes a label input: when I set the labels to [0, 1, 2, 3], meaning I want classes (i.e. digits in MNIST) 0, 1, 2, 3, it works fine.
But when I set the labels to [1, 2, 3, 4], I got the following error, and I have to change my network to self.fc5 = nn.Linear(300, 5) to make it work.
It seems like PyTorch assumes by default that labels start from 0.

THCudaCheck FAIL file=/pytorch/torch/lib/THC/ line=100 error=59 : device-side assert triggered
Traceback (most recent call last):
File "", line 132, in
File "", line 104, in train
File "/home/shixian/.local/lib/python3.5/site-packages/torch/autograd/", line 167, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
File "/home/shixian/.local/lib/python3.5/site-packages/torch/autograd/", line 99, in backward
variables, grad_variables, retain_graph)
RuntimeError: cuda runtime error (59) : device-side assert triggered at /pytorch/torch/lib/THC/

Here is a screenshot of my code.
In My_MINST, I changed the init function.

I found the reason:
for this loss function, each target label has to be >= 0 and <= C-1, where C is the number of classes.
Is there any way to get around this?
E.g. I want the classes 1, 3, 4, 5 from the MNIST dataset.
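To make the constraint concrete, here is a small pure-Python sketch of the range check that nll_loss effectively enforces on class indices. The helper name check_targets is illustrative, not a PyTorch API; in PyTorch the check happens inside the kernel and surfaces as the device-side assert above.

```python
def check_targets(targets, num_classes):
    """Illustrative stand-in for the target-range check in nll_loss:
    every class index must satisfy 0 <= t <= num_classes - 1."""
    for t in targets:
        if not (0 <= t < num_classes):
            raise ValueError(
                f"target {t} out of range for {num_classes} classes "
                f"(valid indices: 0..{num_classes - 1})"
            )

check_targets([0, 1, 2, 3], num_classes=4)   # fine: all in 0..3
try:
    check_targets([1, 2, 3, 4], num_classes=4)  # 4 is out of range
except ValueError as e:
    print(e)
```

This is why [0, 1, 2, 3] works with a 4-way output but [1, 2, 3, 4] triggers the assert unless the output layer grows to 5 units.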

You probably want to create a mapping from your labels [1, 3, 4, 5] to [0, 1, 2, 3] and transform your labels.
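A minimal sketch of that mapping, assuming you keep digits [1, 3, 4, 5]; the names keep, label_map, and remap are illustrative:

```python
# Map the original MNIST digits you keep to contiguous indices 0..C-1.
keep = [1, 3, 4, 5]
label_map = {orig: new for new, orig in enumerate(keep)}
# label_map == {1: 0, 3: 1, 4: 2, 5: 3}

def remap(label):
    """Translate an original digit label to its contiguous class index."""
    return label_map[label]

print([remap(l) for l in [1, 3, 4, 5]])  # [0, 1, 2, 3]
```

With torchvision's MNIST dataset you could pass such a function as the target_transform argument so the labels are remapped on the fly, and then use an output layer with C = len(keep) units.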

But what I want to do is sequential learning. I want my network first learn classes [0,1]. Then learn classes [3,4] on the same network and test the classes [1,2] to check the catastrophic forgetting problem. The nll_function will not work on the classes [3,4].

Okay. For that to work then, your input to nll_loss needs to be of size N * C, where C = 5. In particular, input[n][0], input[n][1], input[n][2] should equal 0 for all n, and input[n][3], input[n][4] should (probably) be non-zero.
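A minimal sketch of such a network, assuming a single hidden layer of width 300 as in the question's fc5 snippet; the layer names and sizes besides nn.Linear(300, 5) are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, 300)
        # C = 5, so targets 0..4 (including classes 3 and 4) are valid.
        self.fc5 = nn.Linear(300, num_classes)

    def forward(self, x):
        x = F.relu(self.fc1(x.view(x.size(0), -1)))
        return F.log_softmax(self.fc5(x), dim=1)

net = Net()
x = torch.randn(8, 1, 28, 28)          # dummy batch of MNIST-sized images
out = net(x)                           # shape (8, 5)
targets = torch.tensor([3, 4] * 4)     # labels 3 and 4 are now in range
loss = F.nll_loss(out, targets)        # no device-side assert
```

The same 5-way head can then be trained on [0, 1] first and on [3, 4] afterwards without changing the output size between phases.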

I will implement this structure.