Is weight_norm different in different Python versions?

Hi guys! I have a question about weight_norm.

My PyTorch version is 1.0.1 in all environments, and I have set the same seed in each. But when I use weight_norm in Python 2.7 and Python 3.7 respectively, I get a different weight_g but the same weight_v.

Moreover, I get the same weight_g in both versions when I initialize the linear layer with a lower dimension. I am wondering what changes when the initial dimension is increased.

Hope someone could give a detailed analysis! Thanks!

For example:

Init with a lower dimension

# python 3.7
import torch.nn as nn
from torch.nn.utils import weight_norm

weight_norm(nn.Linear(20, 20), dim=None).weight_g
"Parameter containing: tensor(2.6137, requires_grad=True)"

# python 2.7
weight_norm(nn.Linear(20, 20), dim=None).weight_g
"Parameter containing: tensor(2.6137, requires_grad=True)"

Init with a higher dimension

# python 3.7
weight_norm(nn.Linear(2048, 2048), dim=None).weight_g
"tensor(295.5971, requires_grad=True)"

# python 2.7
weight_norm(nn.Linear(2048, 2048), dim=None).weight_g
"tensor(26.1274, requires_grad=True)"

I just tried to reproduce this issue, but I get similar values for Python 2.7 and 3.7 in both use cases (~2.6 and ~26.1).

Could you update to the latest stable release (1.4) in your Python 3.7 environment and rerun the code?
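For example, something along these lines run in both interpreters (just a sketch; the seed value is arbitrary, but keep it identical in both environments):

import torch
import torch.nn as nn
from torch.nn.utils import weight_norm

torch.manual_seed(0)
print(torch.__version__)  # both environments should report the 1.4 release
print(weight_norm(nn.Linear(2048, 2048), dim=None).weight_g)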

That being said, note that Python 2.7 support has been dropped and you shouldn’t use it anymore. :wink:

Hi ptrblck, I am so glad you replied. Thanks!
I have updated my PyTorch to v1.4 and rerun the code in Python 2.7 and 3.7 respectively. As you said, I also get similar values in the two Python environments. It seems like PyTorch fixed this problem in a newer release. :sweat_smile: But I still don’t know why this issue happened. Maybe some computations in the C++ backend were different? :flushed:

It seems weird that the Python version somehow interacts with the results.
However, we should let Python 2.7 rest in peace and stick to 3.x. :wink: