How to clamp a tensor to a range without an in-place operation?

I have a tensor t_p of shape (N, 6). I want to clamp columns 0 and 4 to min=0.0, max=1.0.
I tried this:

        t_p[:, 0] = torch.clamp(t_p[:, 0], min=0.0, max=1.0)
        t_p[:, 4] = torch.clamp(t_p[:, 4], min=0.0, max=1.0)

But during the backward pass I get an error:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

How can I do this efficiently without an in-place operation?
Any suggestion would be helpful :slight_smile:

Note that the error is not caused by clamp itself: torch.clamp is differentiable almost everywhere (its gradient is 1 inside the [min, max] range and 0 outside it). The problem is the in-place assignment into t_p, which overwrites values that autograd saved for the backward pass.
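A quick sanity check that gradients do flow through clamp on its own:

```python
import torch

# clamp is differentiable almost everywhere: gradient 1 inside
# the [min, max] range, 0 outside it
x = torch.tensor([-0.5, 0.5, 1.5], requires_grad=True)
x.clamp(0.0, 1.0).sum().backward()
print(x.grad)  # tensor([0., 1., 0.])
```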

What if you did a clone to avoid the in-place operations?
Something like this:

import torch

a = torch.randn(8, 16, requires_grad=True)
b = a.clone()                   # writes go into the clone, not into a
b[:, 3] = a[:, 3].clamp(0, 1)
b[:, 0] = a[:, 0].clamp(0, 1)
bFinal = b.sum()
bFinal.backward()               # works: a's saved values are untouched

This does not throw an error for me, because the clamped values are written into the clone b while a's original values stay intact for the backward pass.
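If you'd rather skip the clone-then-assign pattern, here is one fully out-of-place alternative (a sketch, not from the thread): build a boolean column mask and let torch.where pick clamped values for the masked columns and the original values elsewhere.

```python
import torch

t_p = (torch.randn(8, 6) * 3).requires_grad_()

# boolean mask selecting the columns to clamp (0 and 4);
# shape (6,) broadcasts against t_p's shape (N, 6)
col_mask = torch.zeros(t_p.shape[1], dtype=torch.bool)
col_mask[[0, 4]] = True

# out-of-place: clamped values where the mask is True,
# untouched values elsewhere; t_p itself is never modified
t_out = torch.where(col_mask, t_p.clamp(0.0, 1.0), t_p)

t_out.sum().backward()  # no in-place error
```

Since nothing is written into t_p, autograd's saved tensors are never invalidated, and the whole result is produced in a single expression.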
