Masking/Threshold Tensor

I have a tensor (A) of shape [1, 3, 13, 13],
another tensor (B) of shape [3],
and a threshold value of 0.5.

I want to iterate over the X dimension of [1, X, 13, 13].
E.g. B = [0.1, 0.3, 0.6]

For each index, if B[index] > threshold, set the corresponding slice A[:, index, :, :] to zero.

I tried A(dim1, dim2 > threshold, dim3, dim4) = 0,
but it gives an error: [Index out of Bounds]

Any other way…

This code should work if I understand your issue correctly:

import torch

A = torch.randn(1, 3, 13, 13)
B = torch.tensor([0.1, 0.3, 0.6])
thresh = 0.5
A[:, B > thresh] = 0
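An equivalent formulation multiplies by a broadcast mask instead of assigning in place, which can be handy when the result has to stay in the autograd graph. This is a sketch using the example values from the question:

```python
import torch

A = torch.randn(1, 3, 13, 13)
B = torch.tensor([0.1, 0.3, 0.6])
thresh = 0.5

# keep channels whose B value is <= thresh, zero the rest;
# the mask is reshaped to [1, 3, 1, 1] so it broadcasts over A
keep = (B <= thresh).float().view(1, -1, 1, 1)
out = A * keep

assert torch.all(out[0, 2] == 0)        # channel with B = 0.6 is zeroed
assert torch.equal(out[0, 0], A[0, 0])  # channels below threshold unchanged
```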

Sorry, I need the thresholding to be usable with backward(). 🙂
I need to do the inverse of the operation defined in nn.Threshold.
Could you please help me with that? I wrote some code where the thresholding is correct, but it is not usable with backward().

The posted operation (zeroing out the values above a threshold) would be similar to a relu mirrored around the threshold, wouldn't it? What issues are you seeing in the backward pass?
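One way to express the "inverse" of nn.Threshold while keeping gradients for the retained values is to negate the input, apply the regular threshold, and negate back. This is only a sketch of that trick (the function name and example values are mine):

```python
import torch
import torch.nn.functional as F

def inverse_threshold(x, threshold, value):
    """Keep x where x < threshold, replace everything else with `value`
    (the mirror image of nn.Threshold, which keeps x where x > threshold)."""
    return -F.threshold(-x, -threshold, -value)

x = torch.tensor([0.1, 0.3, 0.6, 0.9], requires_grad=True)
y = inverse_threshold(x, 0.5, 0.0)  # -> [0.1, 0.3, 0.0, 0.0]
y.sum().backward()
# x.grad is 1 where values were kept, 0 where they were replaced
```

Gradients flow through the kept entries exactly as in a relu, while the replaced entries receive zero gradient.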

It works now. But how can I check that the generated gradient is correct? For example, my result is not what I expected, but I am not sure whether the problem is in the gradient or whether I should change the loss function entirely. I used this code:

      
        # Hard-threshold both tensors into binary {0, 1} masks.
        # Note: the comparison is not differentiable, so no gradient
        # can flow back into bbb or bbb1 through zzz and zzz1.
        zzz = (bbb >= 0.98).float()
        zzz1 = (bbb1 >= 0.98).float()

        L1 = criterion2(zzz, zzz1)
        loss2 = errG1 + 1 * L1
        loss2.backward()

It creates a gradient, but I don't know whether it is correct or not. 🙁
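One thing worth checking: the comparison `bbb >= 0.98` detaches from the autograd graph, so `L1` contributes no gradient to `bbb` at all; whatever gradient you see comes only from `errG1`. A common workaround (my suggestion, not from this thread) is a steep sigmoid as a soft surrogate for the step:

```python
import torch

bbb = torch.rand(2, 3, 3, requires_grad=True)

# hard mask: the comparison detaches from the graph entirely
hard = (bbb >= 0.98).float()
assert hard.grad_fn is None  # no gradient can flow through it

# soft surrogate: a steep sigmoid approximates the step at 0.98
# (the sharpness 50.0 is an arbitrary illustrative choice)
soft = torch.sigmoid(50.0 * (bbb - 0.98))
soft.sum().backward()
assert bbb.grad is not None  # gradients now reach bbb
```

The sharper the sigmoid, the closer it matches the hard threshold, at the cost of vanishing gradients far from 0.98.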