Sorry, I need to do thresholding in a way that is usable for backward().
I need to do the inverse of the operation defined in nn.Threshold.
Would you please help me with that? I wrote some code where the thresholding is correct, but it is not usable for backward().
The posted operation would be similar to a relu (only here the values above a threshold are zeroed out rather than those below it), wouldn’t it? What issues are you seeing in the backward?
It works now. But how can I tell whether the generated gradient is correct? For example, my result is not what I expected, but I am not sure whether the problem is the gradient or whether I should change the loss function entirely. I used this code:
zzz = torch.zeros(bbb.shape)
zzz1 = torch.zeros(bbb1.shape)
# Hard 0/1 threshold at 0.98, element by element
for ii in range(bbb.shape[0]):
    for ii1 in range(bbb.shape[1]):
        for ii2 in range(bbb.shape[2]):
            if bbb[ii, ii1, ii2] >= 0.98:
                zzz[ii, ii1, ii2] = 1
            else:
                zzz[ii, ii1, ii2] = 0
for ii in range(bbb1.shape[0]):
    for ii1 in range(bbb1.shape[1]):
        for ii2 in range(bbb1.shape[2]):
            if bbb1[ii, ii1, ii2] >= 0.98:
                zzz1[ii, ii1, ii2] = 1
            else:
                zzz1[ii, ii1, ii2] = 0
L1 = criterion2(zzz, zzz1)
loss2 = errG1 + 1 * L1
loss2.backward()
It creates a gradient, but I don’t know whether it is correct or not.
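Two things are worth noting about the code above. First, the loops are equivalent to the vectorized `(bbb >= 0.98).float()`, and either way the hard 0/1 step is flat almost everywhere, so it carries no useful gradient; second, `zzz` and `zzz1` are freshly created tensors outside the autograd graph, so `loss2.backward()` cannot reach `bbb` or `bbb1` through `L1` at all. A common workaround is a smooth surrogate such as a steep sigmoid, and `torch.autograd.gradcheck` can then compare its analytic gradient against a numeric one. A sketch (the steepness `k = 50` is an arbitrary choice, not from this thread):

```python
import torch

# A steep sigmoid approximates the hard ">= 0.98" step but has
# a nonzero gradient, so the loss can influence the input.
def soft_threshold(x, t=0.98, k=50.0):
    return torch.sigmoid(k * (x - t))

# gradcheck compares analytic and numeric gradients; it wants
# double-precision inputs with requires_grad=True.
x = torch.rand(4, dtype=torch.double, requires_grad=True)
print(torch.autograd.gradcheck(soft_threshold, (x,)))
```

If the hard 0/1 output is needed in the forward pass, a straight-through estimator (hard values forward, surrogate gradient backward) is another common option.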