Weird behavior in torch.renorm for L-infinity norms

In theory,
torch.renorm(input, float('inf'), 0, maxnorm)
should have the same behavior as
torch.clamp(input, -maxnorm, maxnorm)

But that’s not what I’m seeing:

>>> foo = torch.ones(2, 2)
>>> foo = torch.renorm(foo, float('inf'), 0, 0.5)
>>> foo

 0.5000  0.5000
 0.5000  0.5000
[torch.FloatTensor of size 2x2]

>>> foo = torch.renorm(foo, float('inf'), 0, 0.5)
>>> foo

 0.2500  0.2500
 0.2500  0.2500
[torch.FloatTensor of size 2x2]

While I realize I should probably be using clamp here, it would be nice not to need conditional branches in my code wherever I want to renorm.
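Until the fix lands, one possible stopgap is exactly that kind of conditional branch. This is only a sketch: renorm_or_clamp is a made-up helper, and it assumes, as above, that clamping is an acceptable stand-in for the infinity-norm case.

import torch

def renorm_or_clamp(t, p, dim, maxnorm):
    # Route the infinity-norm case to clamp, which caps every entry
    # at +/- maxnorm; fall back to the built-in renorm otherwise.
    if p == float('inf'):
        return torch.clamp(t, -maxnorm, maxnorm)
    return torch.renorm(t, p, dim, maxnorm)

Applied twice to a tensor of ones with maxnorm 0.5, this leaves every entry at 0.5, which is the behavior the post expects.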

This is a bug. Our code always calculates the inf norm as 1. Thanks for reporting. I created an issue here: https://github.com/pytorch/pytorch/issues/6817
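To make the expected semantics concrete, here is a rough reference for the infinity-norm case, limited to the 2-D, dim=0 setup from the example above; renorm_inf_rows is just an illustrative name, not anything in PyTorch. It computes each row's actual infinity norm instead of always using 1.

import torch

def renorm_inf_rows(t, maxnorm):
    # True infinity norm of each row: the largest absolute entry.
    norms = t.abs().max(dim=1, keepdim=True)[0]
    # Shrink only rows whose norm exceeds maxnorm; leave the rest alone.
    scale = (maxnorm / norms).clamp(max=1.0)
    return t * scale

With the 2x2 example above, the first call scales every row down to 0.5 and a second call leaves it unchanged, rather than halving it again.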

FYI, this is fixed on master.