In theory,
torch.renorm(input, float('inf'), 0, maxnorm)
should have the same behavior as
torch.clamp(input, -maxnorm, maxnorm)
But that’s not what I’m seeing:
>>> foo = torch.renorm(foo, float('inf'), 0, 0.5)
>>> foo
0.5000 0.5000
0.5000 0.5000
[torch.FloatTensor of size 2x2]
>>> foo = torch.renorm(foo, float('inf'), 0, 0.5)
>>> foo
0.2500 0.2500
0.2500 0.2500
[torch.FloatTensor of size 2x2]
While I do realize that I should probably just be using clamp here, it'd be nice not to need a conditional branch everywhere I want to renorm.
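For now, a small wrapper keeps the branch out of my call sites. This is just a sketch (the wrapper name is mine): it treats p = inf as elementwise clamp, which only matches renorm's whole-slice rescaling when every entry of a slice has the same magnitude, as in the example above.

```python
import torch

def renorm(t, p, dim, maxnorm):
    # Hypothetical wrapper (not part of torch): hide the special case
    # in one place so call sites stay branch-free.
    if p == float('inf'):
        # Cap each element at maxnorm. Note this clamps elements
        # individually, whereas torch.renorm rescales a whole
        # sub-tensor; the two agree when a slice's entries all
        # have the same magnitude.
        return torch.clamp(t, -maxnorm, maxnorm)
    return torch.renorm(t, p, dim, maxnorm)

# Applying it twice is now a no-op, unlike the transcript above:
foo = torch.full((2, 2), 0.5)
foo = renorm(foo, float('inf'), 0, 0.5)
foo = renorm(foo, float('inf'), 0, 0.5)  # still all 0.5
```

With this, the inf case is idempotent: a tensor whose max-abs value is already at or below maxnorm passes through unchanged.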