Getting a tensor full of zeros from my model

I was trying my first NN in PyTorch, following this article on coloring black-and-white images. After implementing the code in PyTorch, what I am getting is a tensor full of zeros out of my model for every image.

This method converts an RGB image to LAB format and then tries to predict the ‘a’ and ‘b’ channels of the LAB format using a convolutional network. I have made a ‘MODEL’ class for this and use two of its objects, ‘model_a’ and ‘model_b’, for the ‘a’ and ‘b’ parameters respectively.
I don’t know what is happening; can someone please help me understand what my issue is?
Thank you


Skimming through your code, it looks like the view on your target tensors might be wrong. Since your image channels are in dim2 when loading the image, you should use image.permute(2, 0, 1) to shift the channels to dim0.
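Something like this (using a hypothetical H x W x C image, since I don't know your exact loading code) shows the idea:

```python
import torch

# A fake H x W x C image, the layout most image libraries produce (channels last)
image = torch.rand(224, 224, 3)

# Move the channel dimension to dim0 so the tensor is C x H x W,
# which is what PyTorch conv layers expect
chw = image.permute(2, 0, 1)
print(chw.shape)  # torch.Size([3, 224, 224])
```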

Also, could you explain your code a bit?
It seems you are feeding the a and b image planes from the lab format to your models.
As far as I understand, you would like to use grayscale images as inputs, i.e. the l plane, and predict a and b.
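As a rough sketch of that split (assuming `lab` is an array already converted from RGB, e.g. via skimage.color.rgb2lab; the conversion itself is omitted here):

```python
import numpy as np

# Stand-in for an H x W x 3 LAB image; in your code this would come
# from converting the RGB input (e.g. with skimage.color.rgb2lab)
lab = np.random.rand(64, 64, 3)

l = lab[..., 0:1]   # lightness plane: the grayscale-like model input
ab = lab[..., 1:]   # a and b planes: the prediction targets

print(l.shape, ab.shape)  # (64, 64, 1) (64, 64, 2)
```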


Oh man, I really messed it up (I think these are the signs of being a newbie). Thanks for replying, and yes, you are right, I have to feed the grayscale image. :rofl:
Really, I was going totally mad about these zero tensors. Thanks, man.

I didn’t get the image.permute thing; could you please point me to a resource for it (I don’t know about dim2 or dim0 channels)? It would be really helpful.
And again, thanks a lot @ptrblck.

I have changed the input to a grayscale image, but the outputs are still zeros. It would be very helpful if you could shed some light on that ‘dim2’ and ‘image.permute’.
After the first optimizer step in my model, the values of the tensor become zero…
Thank you

If you use view to move the channel dimension around, your image might end up corrupted. Try to visualize a single plane using your method and then again using permute, e.g. with matplotlib.
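A tiny example (with a made-up 2x2 image) demonstrates the difference:

```python
import torch

# Tiny 2 x 2 image with 3 channels (H x W x C) whose red plane is all ones
img = torch.zeros(2, 2, 3)
img[..., 0] = 1.0

# permute reorders the dimensions: the red plane comes out intact
red_permute = img.permute(2, 0, 1)[0]

# view only reinterprets the flat memory: the "red plane" now mixes
# values from all three channels
red_view = img.view(3, 2, 2)[0]

print(red_permute)  # all ones
print(red_view)     # ones and zeros interleaved
```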

Regarding your zero outputs, could you try to lower your learning rate a bit and try it again?
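For example (with a stand-in conv layer, since I don't know your exact model or optimizer setup):

```python
import torch

model = torch.nn.Conv2d(1, 2, kernel_size=3, padding=1)  # stand-in for the real model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-1)

# Lower the learning rate in place without rebuilding the optimizer
for group in optimizer.param_groups:
    group['lr'] = 1e-3

print(optimizer.param_groups[0]['lr'])  # 0.001
```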


I checked the values of both the image and the tensor; they seem to be the same.
You know, even when I ran the model 3000 times on a single image, the loss was not converging.
Now I totally don’t know what is happening.