Calculating Infinity norm for a Variable and Tensor

I am trying to compute the infinity norm of a tensor and of a Variable, but one always returns 1 while the other throws an error. Any suggestions on how to implement this?

>>> import torch
>>> from torch.autograd import Variable
>>> tor = torch.rand([3, 3])
>>> var = Variable(torch.rand([3, 3]))
>>> c = float('inf')
>>> tor_norm = tor.norm(c)
>>> tor_norm
1.0
>>> var_norm = var.norm(c)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/torch3/lib/python3.5/site-packages/torch/autograd/variable.py", line 396, in norm
    return super(Variable, self).norm(p)
RuntimeError: value cannot be converted to type float without overflow: inf

What about

var.norm(p=float("inf"))

I had tried that as well; it gives the same result: 1.0 for the tensor and the error for the Variable.

>>> var.norm(p=float("inf"))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/torch3/lib/python3.5/site-packages/torch/autograd/variable.py", line 396, in norm
    return super(Variable, self).norm(p)
RuntimeError: value cannot be converted to type float without overflow: inf
>>> tor.norm(p=float("inf"))
1.0

Strange that it would work for a tensor but not a variable.

I am out of my depth here.

This is a bug in PyTorch 0.3 that has been fixed in master. You can build from source or wait for the next release.
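Until then, since the entrywise infinity norm is just the largest absolute value of any element (in torch terms, something like `var.abs().max()`), you can compute it without calling `.norm()` at all. A minimal pure-Python sketch of the definition, with a hypothetical toy matrix:

```python
def inf_norm(mat):
    # Entrywise infinity norm: the largest absolute value of any entry.
    return max(abs(x) for row in mat for x in row)

m = [[0.1, -0.7, 0.3],
     [0.9, -0.2, 0.4]]
print(inf_norm(m))  # 0.9
```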


Thanks @richard!
I actually tried torch.max(torch.abs(var)) as an alternative to the infinity norm. This works for a simple example, but when I use it in a loss function for a given model, it gives a CUDA out-of-memory error even with a batch size as low as 4.

def l2_regu(mdl):
    # Sum of squared entrywise infinity norms over all weight matrices;
    # parameters with fewer than 2 dimensions (e.g. biases) are skipped.
    l2_reg = None
    for W in mdl.parameters():
        if W.ndimension() < 2:
            continue
        w_tmp = W
        if l2_reg is None:
            l2_reg = torch.max(torch.abs(w_tmp)) ** 2
        else:
            l2_reg = l2_reg + torch.max(torch.abs(w_tmp)) ** 2
    return l2_reg

Running this throws an error saying:
File "train.py", line 208, in train
    oloss =  l2_regu(model)
  File "train.py", line 38, in l2_reg_ortho
    l2_reg = l2_reg + (torch.max(torch.abs(w_tmp)))**2
RuntimeError: cuda runtime error (2) : out of memory at /pytorch/torch/lib/THC/generic/THCStorage.cu:58
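For what it's worth, the quantity being accumulated here is just Σ_W (max_ij |W_ij|)², i.e. the squared entrywise infinity norm of each weight matrix. A torch-free sketch of that arithmetic on hypothetical toy matrices, to make the math concrete:

```python
def sq_inf_norm_sum(mats):
    # Sum over matrices of (largest absolute entry) squared.
    return sum(max(abs(x) for row in m for x in row) ** 2 for m in mats)

mats = [[[1.0, -3.0], [0.5, 2.0]],   # max |entry| = 3.0
        [[-0.5, 0.25]]]              # max |entry| = 0.5
print(sq_inf_norm_sum(mats))  # 9.0 + 0.25 = 9.25
```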