Error While using Spectral Normalization Code

Hello,

I am using the spectral normalization code from https://github.com/pytorch/pytorch/blob/master/torch/nn/utils/spectral_norm.py in my network, but I am encountering an issue: new_empty is not present in my version of PyTorch. It is used in this function:

    def apply(module, name, n_power_iterations, eps):
        fn = SpectralNorm(name, n_power_iterations, eps)
        weight = module._parameters[name]
        height = weight.size(0)

        u = normalize(weight.new_empty(height).normal_(0, 1), dim=0, eps=fn.eps)
        module.register_parameter(fn.name + "_org", weight)
        module.register_buffer(fn.name + "_u", u)

        module.register_forward_pre_hook(fn)
        return fn

I am using weight.data.new instead of weight.new_empty. But this causes an error in the line

    v = normalize(torch.matmul(weight_mat.t(), u), dim=0, eps=self.eps)

since it expects u to be a Parameter.

So in the apply function I made the following change:

    u = Parameter(normalize(weight.data.new(height).normal_(0, 1), dim=0, eps=fn.eps))

But making this change throws another error:

  File "/home/bansa01/pytorch_wideres/tmp_spectral_norm/WideResNet-pytorch/spectral_norm.py", line 42, in __call__
    setattr(module, self.name + '_u', u)
  File "/home/bansa01/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 407, in __setattr__
    .format(torch.typename(value), name))
TypeError: cannot assign 'torch.autograd.variable.Variable' as buffer 'weight_u' (torch.Tensor or None expected)

Here it expects u to be of type Tensor or None, so it becomes a kind of loop: the matmul wants u to be a Parameter/Variable, while the buffer assignment wants a plain Tensor. How should I deal with this?

Thanks,
Nitin

The root cause is likely that the code in master was written after the big Tensor/Variable merge, while you are using a version that doesn’t have it. The most straightforward way of dealing with this is to upgrade your PyTorch version. It’s good for you!
If you don’t want to do that, you can sprinkle .data and Variable appropriately. But honestly, I think it is a dead end.
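If you do want to try it anyway, very roughly it could look like the sketch below. This is untested and simplified, it assumes a 0.3.x-style install where Tensor and Variable are still separate types, and SpectralNormPre04 is just a stand-in name for a trimmed-down version of the SpectralNorm class in spectral_norm.py, not the upstream implementation:

    import torch
    from torch.autograd import Variable
    from torch.nn.functional import normalize


    class SpectralNormPre04(object):
        # Simplified stand-in for SpectralNorm, adapted for PyTorch versions
        # where Tensor and Variable are separate types.
        def __init__(self, name='weight', n_power_iterations=1, eps=1e-12):
            self.name = name
            self.n_power_iterations = n_power_iterations
            self.eps = eps

        def compute_weight(self, module):
            weight = getattr(module, self.name + '_org')   # Parameter, i.e. a Variable
            u = getattr(module, self.name + '_u')          # buffer, i.e. a plain Tensor
            height = weight.size(0)
            weight_mat = weight.view(height, -1)

            u_var = Variable(u)                            # wrap the buffer so matmul only sees Variables
            for _ in range(self.n_power_iterations):
                v = normalize(torch.matmul(weight_mat.t(), u_var), dim=0, eps=self.eps)
                u_var = normalize(torch.matmul(weight_mat, v), dim=0, eps=self.eps)

            sigma = u_var.dot(torch.matmul(weight_mat, v))
            return weight / sigma, u_var.data              # hand a plain Tensor back for the buffer

        def __call__(self, module, inputs):
            weight, u = self.compute_weight(module)
            setattr(module, self.name, weight)
            setattr(module, self.name + '_u', u)           # u is a Tensor again, so this is accepted

        @staticmethod
        def apply(module, name, n_power_iterations, eps):
            fn = SpectralNormPre04(name, n_power_iterations, eps)
            weight = module._parameters[name]
            height = weight.size(0)

            # build u from the underlying Tensor and do NOT wrap it in Parameter:
            # register_buffer only accepts plain Tensors on these versions
            u = normalize(weight.data.new(height).normal_(0, 1), dim=0, eps=fn.eps)
            del module._parameters[name]                   # so __call__ can set the plain weight attribute
            module.register_parameter(fn.name + '_org', weight)
            module.register_buffer(fn.name + '_u', u)

            module.register_forward_pre_hook(fn)
            return fn

The key point is that the buffer always holds a plain Tensor and is only wrapped in a Variable for the duration of the power iteration.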
Note that the code in master had a fix for a memory leak applied today (Issue #7261), so you probably want to have that, too.

Best regards

Thomas

Thanks Thomas! I have pulled the latest PyTorch version. I have a slightly unrelated question regarding GitHub, since I am fairly new to it: say I want to see what changes went in as part of issue #7261, how do I do that?

Regards,
Nitin

There is a link to the pull request fixing the bug in the issue; click on “Files changed” in the pull request to see the changes. If you are using the latest version, you are fine.
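If you prefer the command line and have a local clone of the pytorch repository, something along these lines should also turn up the relevant commit (the grep pattern is just a guess at how the number shows up in the commit message, and the hash is a placeholder):

    git log --all --oneline --grep='#7261'   # list commits whose message mentions the number
    git show <commit-hash>                   # show the full diff of one of those commits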

But I see you started a new thread.

Best regards

Thomas