I made a convolutional network using PyTorch that identifies the breed (out of 120) of a dog from an image. However, the network's outputs approach 0, and consequently the accuracy stays around 1/120, i.e. chance level. Any ideas?

Code here: https://github.com/spencerkraisler/Dog_Breed_Identification

At first the images were normalized so the tensors held floats between 0 and 1, but the displayed images looked nearly black, and the problem was the same: the weights approached 0 as the network trained. I then removed the normalization (so the image tensors hold integers between 0 and 255), yet the problem persisted.

The images don’t seem to be distorted when I use the showTorchImage() method.

Do you need to use nn.MSELoss for your classification task?
If not, could you switch to nn.CrossEntropyLoss and remove the one-hot encoding, since CrossEntropyLoss expects a target tensor of class indices?
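A minimal sketch of the suggested setup, with hypothetical shapes (batch of 8, 120 breeds) standing in for your actual model's outputs:

```python
import torch
import torch.nn as nn

num_classes = 120

# Raw network outputs (logits): nn.CrossEntropyLoss applies
# log-softmax internally, so do NOT add a softmax layer yourself.
logits = torch.randn(8, num_classes)

# Targets are plain class indices in [0, num_classes), shape (8,),
# not one-hot vectors of shape (8, num_classes).
targets = torch.randint(0, num_classes, (8,))

criterion = nn.CrossEntropyLoss()
loss = criterion(logits, targets)
```

If your labels are currently one-hot, `targets = one_hot.argmax(dim=1)` recovers the index form.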