nn.utils.weight_norm not working for me

Hi,

I am trying to apply weight normalization using the function named in the title,
but I am getting the following error:

        x = nn.utils.weight_norm(F.relu(self.conv1(x)), name='weight')
      File "/home/graviton/anaconda2/lib/python2.7/site-packages/torch/nn/utils/weight_norm.py", line 98, in weight_norm
        WeightNorm.apply(module, name, dim)
      File "/home/graviton/anaconda2/lib/python2.7/site-packages/torch/nn/utils/weight_norm.py", line 34, in apply
        weight = getattr(module, name)
      File "/home/graviton/anaconda2/lib/python2.7/site-packages/torch/autograd/variable.py", line 65, in __getattr__
        return object.__getattribute__(self, name)
    AttributeError: 'Variable' object has no attribute 'weight'

What am I doing wrong? Thanks.

The F.relu needs to be outside of the weight_norm call, I think. It should be:

x = nn.utils.weight_norm(self.conv1(x), name='weight')

Hi Soumith,

Thanks for the clarification, but it still doesn’t solve my problem.

My code after your suggestion is:

x = F.relu(nn.utils.weight_norm(self.conv1(x), name='weight'))

And it still throws the same error:

*** AttributeError: 'Variable' object has no attribute 'weight'

I also ran the following sample code to test whether it happens in other places too:

    import torch
    import torch.nn as nn
    from torch.autograd import Variable


    class Net(nn.Module):
        def __init__(self):
            super(Net, self).__init__()
            self.fc1 = nn.Linear(320, 50)

        def forward(self, x):
            x = self.fc1(x)
            x = nn.utils.weight_norm(x, name='weight')
            return x


    model = Net()
    model.cuda()
    x = Variable(torch.randn(64, 320).normal_(-1, 1).cuda())
    xf = model(x)

And it’s also throwing the same error.

Thanks for bearing with me.

You are using it wrong.

weight_norm has to be registered on a module, in the constructor.

self.conv1(x) returns a Variable, not a module.

Instead, you need to do something like this (where you define self.conv1):

self.conv1 = nn.utils.weight_norm(nn.Conv2d(...), name='weight')

Read the docs (especially the example in the docs) for more details: http://pytorch.org/docs/master/nn.html#torch.nn.utils.weight_norm
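To make the pattern concrete, here is a minimal self-contained sketch of the fixed version of the earlier Linear example, with weight_norm registered in the constructor. It uses the current PyTorch API (plain Tensors instead of the old Variable wrapper), and the module/variable names are just illustrative:

```python
import torch
import torch.nn as nn


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # weight_norm wraps the *module*, reparameterizing its 'weight'
        # parameter as weight = g * v / ||v||.
        self.fc1 = nn.utils.weight_norm(nn.Linear(320, 50), name='weight')

    def forward(self, x):
        # forward() just calls the wrapped module as usual; the
        # reparameterization is applied automatically.
        return self.fc1(x)


model = Net()
out = model(torch.randn(64, 320))
print(out.shape)  # torch.Size([64, 50])

# The wrapped module now exposes the magnitude/direction parameters:
print(hasattr(model.fc1, 'weight_g'), hasattr(model.fc1, 'weight_v'))  # True True
```

Note that weight_norm replaces the module's `weight` parameter with `weight_g` (the norm) and `weight_v` (the direction), which is why calling it on a Variable/Tensor output, rather than on a module, raises the AttributeError above.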


Thanks Soumith. It worked.

Silly me. I should’ve read the docs more carefully.