Using linear layers? New user transferring from Keras

dgriff,
I made that change in the training and still receive the same exact error

oh I see, you can't have this linear layer `self.out = nn.Linear(10, 1)` - this leaves you with output size 1

you need that size 10 output which corresponds to the 10 digits you are trying to choose from

sorry I missed that before :astonished::sweat_smile:
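To make the fix concrete, here's a minimal sketch of what the head of the net should look like (layer sizes are illustrative, not taken from the original code): the last `nn.Linear` must produce 10 scores, one per digit, and the output should pair with `F.nll_loss`:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 128)   # 28*28 flattened MNIST pixels
        self.out = nn.Linear(128, 10)    # 10 outputs, one per digit class

    def forward(self, x):
        x = x.view(x.size(0), -1)        # flatten each image to a vector
        x = F.relu(self.fc1(x))
        return F.log_softmax(self.out(x), dim=1)  # pairs with F.nll_loss

model = MLP()
scores = model(torch.randn(4, 1, 28, 28))
print(scores.shape)  # one row of 10 log-probabilities per image
```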

also this looks wrong:

`test_loss += F.nll_loss(output, target, size_average=False).data[0]`

change to

`test_loss += F.nll_loss(output, target).data[0]`
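For context on what that flag does (a side note, not from the original code): `size_average=False` makes `nll_loss` return the *sum* of per-sample losses rather than the batch mean, so the accumulated total would need dividing by the dataset size afterwards. In current PyTorch the flag is spelled `reduction='sum'`:

```python
import torch
import torch.nn.functional as F

# Illustrative comparison of the two reduction modes.
torch.manual_seed(0)
logits = torch.log_softmax(torch.randn(8, 10), dim=1)
target = torch.randint(0, 10, (8,))

summed = F.nll_loss(logits, target, reduction='sum')   # sum over the batch
mean = F.nll_loss(logits, target)                      # default: batch mean
print(torch.isclose(summed / 8, mean))                 # sum / batch size == mean
```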

also, not sure what you're doing here in this function: `def num_flat_features(self, x):`

but I don't see you actually using it in the code, so it shouldn't matter

Hi,

oh, sorry. This is what I meant by score vector, but I looked at the wrong net when checking whether the next iteration fixed it. :confused:

As a general comment: If you put your code between triple backticks, i.e.

[quote]```
your code here
```
[/quote]
or link to gist or somewhere, you increase your chance of anyone being curious enough to actually give your code a spin and make it much easier for people to read.

Best regards

Thomas

Great, thank you very much, the code is now running fine it appears.

Actually it is running, but the loss is barely coming down in training. I even tried adding more layers to no effect.
I know convolutional nets perform better on MNIST, but feedforward MLPs should work fine as well. I've seen people use them successfully on the dataset, and in my experience it worked fine in Keras. Are you sure everything is being processed correctly?
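One quick way to debug a "loss barely moves" MLP, as a hedged sanity check rather than a fix for this specific code: try to overfit a single tiny batch. Any correctly wired model and optimizer should drive the loss down sharply; if it doesn't, the usual suspects are a bad learning rate, a missing `optimizer.step()`, or a softmax/log-softmax mismatch with the loss. All names below are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Overfit one small random batch: the loss should drop well below its start.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(16, 784)
y = torch.randint(0, 10, (16,))

start = None
for step in range(200):
    opt.zero_grad()
    loss = F.cross_entropy(model(x), y)  # cross_entropy = log_softmax + nll_loss
    loss.backward()
    opt.step()
    if start is None:
        start = loss.item()
print(start, loss.item())
```

If even this doesn't converge, the problem is in the training loop rather than the data pipeline.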

I have a full example here with everything you need working:
https://github.com/QuantScientist/Deep-Learning-Boot-Camp/blob/master/day%2002%20PyTORCH%20and%20PyCUDA/PyTorch/21-PyTorch-CIFAR-10-Custom-data-loader-from-scratch.ipynb

I suggest you copy and run it and then amend your code.

Best,