Clamp a Tensor dimension-wise


Say x is a tensor with shape (M, N), while min and max are tensors with shape (M,).

I basically want to do:

for m in range(M):
    torch.clamp(x[m, :], min=min[m], max=max[m], out=x[m,:])

That would be applying clamp row-wise, with a different min/max per row. Can I avoid the for loop?


answering my own question:

    torch.min(torch.max(x, min[:, None]),
              max[:, None])

Less elegant than a single call to clamp, but it works fine: the [:, None] indexing adds a trailing dimension so that min and max broadcast against each row of x.
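For what it's worth, newer PyTorch versions (1.9 and later, if I remember correctly) accept tensors for the min and max arguments of torch.clamp, so the broadcasting trick works with clamp directly. A small sketch comparing both approaches (the example values are mine):

```python
import torch

# Example data: 3 rows, each with its own clamp range
x = torch.tensor([[0., 5., 10., 15.],
                  [0., 5., 10., 15.],
                  [0., 5., 10., 15.]])
lo = torch.tensor([2., 4., 6.])    # per-row minimum, shape (M,)
hi = torch.tensor([8., 12., 9.])   # per-row maximum, shape (M,)

# clamp with tensor bounds: [:, None] makes them shape (M, 1),
# which broadcasts against x along the last dimension
clamped = torch.clamp(x, min=lo[:, None], max=hi[:, None])

# equivalent to the min/max composition from the answer above
same = torch.min(torch.max(x, lo[:, None]), hi[:, None])
assert torch.equal(clamped, same)

print(clamped)
```

Note I renamed the bound tensors lo/hi here so they don't shadow the Python builtins min and max.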