Hello,
I am using the Spectral Normalization code from https://github.com/pytorch/pytorch/blob/master/torch/nn/utils/spectral_norm.py in my network, but I am running into a compatibility issue: my version of PyTorch does not have new_empty, which is used in this function:
def apply(module, name, n_power_iterations, eps):
    fn = SpectralNorm(name, n_power_iterations, eps)
    weight = module._parameters[name]
    height = weight.size(0)
    u = normalize(weight.new_empty(height).normal_(0, 1), dim=0, eps=fn.eps)
    module.register_parameter(fn.name + "_org", weight)
    module.register_buffer(fn.name + "_u", u)
    module.register_forward_pre_hook(fn)
    return fn
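For context, here is a minimal standalone sketch of the substitution I am attempting (shapes are made up; `weight` is just a stand-in for the module's weight parameter):

```python
import torch
from torch.nn.functional import normalize

# Stand-in for the module's weight parameter (shape is arbitrary).
weight = torch.nn.Parameter(torch.randn(4, 3))
height = weight.size(0)

# Newer PyTorch: weight.new_empty(height).normal_(0, 1)
# My backport: weight.data.new(height) allocates an uninitialized tensor
# with the same dtype/device, which normal_ then fills in place.
u = normalize(weight.data.new(height).normal_(0, 1), dim=0, eps=1e-12)
```

The result should be a unit-norm vector whose length matches the weight's first dimension, the same as what new_empty would produce.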
As a workaround I am using weight.data.new instead of weight.new_empty. But this causes an error at the line
    v = normalize(torch.matmul(weight_mat.t(), u), dim=0, eps=self.eps)
since the matmul there expects u to be a Parameter (a Variable), not a plain tensor.
So in the apply function I made the following change:
u = Parameter(normalize(weight.data.new(height).normal_(0, 1), dim=0, eps=fn.eps))
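One direction I am considering (untested against my install, shapes are made up) is the opposite of the change above: keep u as a plain Tensor so register_buffer accepts it, and wrap it in a Variable only at the point where it meets the Variable weight matrix inside the power iteration. A minimal sketch of that idea:

```python
import torch
from torch.autograd import Variable
from torch.nn.functional import normalize

# Hypothetical shapes mirroring the spectral-norm power iteration.
weight_mat = Variable(torch.randn(4, 3))           # the (flattened) weight
u = normalize(torch.randn(4), dim=0, eps=1e-12)    # plain Tensor: valid as a buffer

# Wrap u in a Variable only for the matmul, instead of storing it
# as a Parameter on the module.
v = normalize(torch.matmul(weight_mat.t(), Variable(u)), dim=0, eps=1e-12)

# Pull out plain tensor data again before writing the buffer back.
u_new = normalize(torch.matmul(weight_mat, v), dim=0, eps=1e-12).data
```

I am not sure whether this is the intended way to backport it, which is why I am asking below.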
But making this change throws another error:
File "/home/bansa01/pytorch_wideres/tmp_spectral_norm/WideResNet-pytorch/spectral_norm.py", line 42, in __call__
setattr(module, self.name + '_u', u)
File "/home/bansa01/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 407, in __setattr__
.format(torch.typename(value), name))
TypeError: cannot assign 'torch.autograd.variable.Variable' as buffer 'weight_u' (torch.Tensor or None expected)
So here u is expected to be a Tensor or None, and I seem to be stuck in a loop: the matmul wants u to be a Parameter, but __setattr__ wants the buffer to be a plain Tensor. How should I deal with this?
Thanks,
Nitin