Is there a way to clip with a tensor?

Hi everyone,

Suppose I have a tensor

mytensor = torch.tensor([10, 20, 30, 40])

Each element has a different threshold. Let's say the min thresholds are:

mythreshold = torch.tensor([2, 50, 25, 100])

I wanted to do torch.clamp(mytensor, min=mythreshold) and get as output

torch.tensor([10, 50, 30, 100]).

Is it possible?

Thanks!

torch.where should work:

mytensor = torch.tensor([10, 20, 30, 40])
mythreshold = torch.tensor([2, 50, 25, 100])

torch.where(mytensor < mythreshold, mythreshold, mytensor)
> tensor([10, 50, 30, 100])
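
As a side note, newer PyTorch releases also support element-wise bounds directly, so torch.maximum (and, if I'm not mistaken, torch.clamp with tensor min/max arguments in recent versions) should give the same result:

torch.maximum(mytensor, mythreshold)
> tensor([10, 50, 30, 100])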

Worked like a charm. Thanks for your awesome work here at the forum @ptrblck


Hi Ptrblck,

I want to use Border22 with only the values greater than 0. I tried x[x>value] and x[:,x>value], but it does not work. I need to keep only the values greater than zero and then pass the result to backward().
This (Border22[Border22>0]) still gives me 0 values as well.

        Border = torch.ones(fake.shape)
        Border[:, :, 4-2:4+3, 4-2:4+3] = 0
        Border22 = torch.mul(Border, fake).view(-1)
        Border22[Border22 > 0]
        Ones = torch.ones(1)
        loss2 = criterion2()(Border22, Ones)
        loss2.backward()

It seems you are missing the assignment of the indexing operation:

Border22 = Border22[Border22>0]
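
A minimal, runnable sketch of the idea (the shape of fake and the .mean() loss are just placeholders for illustration):

import torch

fake = torch.rand(1, 1, 9, 9, requires_grad=True)   # placeholder input
Border = torch.ones(fake.shape)
Border[:, :, 4-2:4+3, 4-2:4+3] = 0                   # zero out the interior
Border22 = torch.mul(Border, fake).view(-1)
Border22 = Border22[Border22 > 0]                    # keep the assignment
loss2 = Border22.mean()                              # placeholder loss
loss2.backward()
print(fake.grad.abs().sum())                         # non-zero gradient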

Hi Ptrblck,

I changed my code in different ways to solve the error. The current code is as follows, but it gives me zero gradients. Would you please help me with that? L1 is 0.8.


        Thresholddefalut = nn.Threshold(0.98, 0)

        bbb = fake.squeeze(1)
        bbb1 = MASKGaussy.squeeze(1)

        zzz = Thresholddefalut(bbb)
        zzz1 = Thresholddefalut(bbb1)

        L1 = nn.L1Loss()(zzz, zzz1)

        loss2 = L1

        loss2.backward()
        print(netG.l3[0].weight.grad)
## -----------------

class Generator994(nn.Module):
    def __init__(self, ngpu, nz, ngf):
        super(Generator994, self).__init__()
        self.ngpu = ngpu
        self.nz = nz
        self.ngf = ngf

        self.l1 = nn.Sequential(
            nn.ConvTranspose2d(self.nz, self.ngf * 8, 3, 1, 0, bias=False),
            nn.BatchNorm2d(self.ngf * 8),
            nn.ReLU(True),
        )

        self.l2 = nn.Sequential(
            nn.ConvTranspose2d(self.ngf * 8, self.ngf * 4, 3, 1, 0, bias=False),
            nn.BatchNorm2d(self.ngf * 4),
            nn.ReLU(True),
        )

        self.l3 = nn.Sequential(
            nn.ConvTranspose2d(self.ngf * 4, self.ngf * 2, 3, 1, 0, bias=False),
            nn.BatchNorm2d(self.ngf * 2),
            nn.ReLU(True),
        )

        self.l4 = nn.Sequential(
            nn.ConvTranspose2d(self.ngf * 2, 1, 3, 1, 0, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, input):
        out = self.l1(input)
        out = self.l2(out)
        out = self.l3(out)
        out = self.l4(out)

        return out

I would appreciate your response.

Your current code snippet uses undefined methods, so I cannot debug it.
As usual, could you post an executable code snippet that reproduces this issue?

Hi Ptrblck,

I solved the problem with this code. Now I have another issue; it never ends. I need to do binary thresholding, where x > value becomes 1 and x < value becomes 0, but in such a way that after thresholding I can still pass the result to backward().
For example, in the above code zzz and zzz1 would be the binary maps used in the L1 loss. How should I define zzz and zzz1?

Based on your description it seems you want to use a step function, which has a zero gradient everywhere (except exactly at x == value, where it is not differentiable), so it won't be useful for your loss and gradient calculation.
If this worked, you could directly optimize the model predictions without e.g. using sigmoid or softmax.
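
If you still need a (roughly) binary map inside the loss, one common workaround is a smooth surrogate such as a steep sigmoid, which approximates the hard threshold while keeping non-zero gradients. A minimal sketch (the threshold of 0.98 comes from your earlier snippet; the sharpness factor and shapes are just examples):

import torch
import torch.nn as nn

x = torch.rand(4, 1, 9, 9, requires_grad=True)     # stand-in for the model output
target = (torch.rand(4, 1, 9, 9) > 0.98).float()    # stand-in for the binary target map

value = 0.98   # threshold
k = 50.0       # sharpness: larger values approach a hard step

soft_mask = torch.sigmoid(k * (x - value))           # ~0 below the threshold, ~1 above
loss = nn.L1Loss()(soft_mask, target)
loss.backward()
print(x.grad.abs().sum())                            # non-zero gradient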