[resolved] torch.max element-wise max fails for results of autograd.grad

I’ve built from the latest master and am unable to take the element-wise max of a gradient calculation. Is this a bug or am I doing something wrong? Thanks!

Fails:

import torch
from torch import autograd
from torch.autograd import Variable

a = Variable(torch.ones(5, 2), requires_grad=True)
b = a ** 2
c = b ** 2
# Differentiate c w.r.t. b, keeping the graph so g is itself differentiable.
g = autograd.grad(outputs=c, inputs=b,
                  grad_outputs=torch.ones(b.size()),
                  create_graph=True, retain_graph=True, only_inputs=True)[0]
print(g)
b = torch.FloatTensor([0])  # plain tensor, not a Variable
torch.max(g, b)

Also fails:

a = Variable(torch.ones(5, 2), requires_grad=True)
b = a ** 2
c = b ** 2
g = autograd.grad(outputs=c, inputs=b,
                  grad_outputs=torch.ones(b.size()),
                  create_graph=True, retain_graph=True, only_inputs=True)[0]
print(g)
b = torch.ones(g.size())
torch.max(g, b)

The error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-91-63397f258c3b> in <module>()
      7 print(g)
      8 b = torch.ones(g.size())
----> 9 torch.max(g, b)

/usr/lib/python3.5/site-packages/torch/autograd/variable.py in max(self, dim, keepdim)
    454         if isinstance(dim, Variable):
    455             return Cmax.apply(self, dim)
--> 456         return Max.apply(self, dim, keepdim)
    457 
    458     def min(self, dim=None, keepdim=False):

/usr/lib/python3.5/site-packages/torch/autograd/_functions/reduce.py in forward(cls, ctx, input, dim, keepdim, additional_args)
    152             if additional_args:
    153                 args = additional_args + args
--> 154             output, indices = fn(*args)
    155             ctx.save_for_backward(indices)
    156             ctx.mark_non_differentiable(indices)

TypeError: max received an invalid combination of arguments - got (torch.FloatTensor, bool), but expected one of:
 * no arguments
 * (torch.FloatTensor other)
 * (int dim)
      didn't match because some of the arguments have invalid types: (torch.FloatTensor, bool)
 * (int dim, bool keepdim)

Actually, this was a mistake on my part, never mind: the second argument needs to be a Variable, i.e. b = Variable(torch.FloatTensor([0])).
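
For completeness, a minimal sketch of the corrected first snippet under the same assumptions as above (a PyTorch master build from this era; the name threshold is mine). Wrapping the second operand in a Variable makes torch.max dispatch to the element-wise comparison path (Cmax) instead of the reduction overload that raised the TypeError:

import torch
from torch import autograd
from torch.autograd import Variable

a = Variable(torch.ones(5, 2), requires_grad=True)
b = a ** 2
c = b ** 2
g = autograd.grad(outputs=c, inputs=b,
                  grad_outputs=torch.ones(b.size()),
                  create_graph=True, retain_graph=True, only_inputs=True)[0]
# Both operands are Variables now, so this is the element-wise max;
# the size-[1] operand is assumed to broadcast against g, as in the fix above.
threshold = Variable(torch.FloatTensor([0]))
print(torch.max(g, threshold))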