Illegal memory access caused by clamp?

(Wei Wang) #1

I have implemented my own autograd function with forward and backward. The error message says the failure occurs in the backward pass at a clamp operation. Is the tensor.clamp(min=1.e-5) operation forbidden in the backward pass?

THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1549635019666/work/aten/src/THC/generated/…/THCReduceAll.cuh line=317 error=77 : an illegal memory access was encountered
Traceback (most recent call last):
File "main.py", line 347, in <module>
train(epoch)
File "main.py", line 209, in train
loss.backward()
File "/home/anaconda3/lib/python3.7/site-packages/torch/tensor.py", line 102, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/anaconda3/lib/python3.7/site-packages/torch/autograd/__init__.py", line 90, in backward
allow_unreachable=True) # allow_unreachable flag
File "/home/anaconda3/lib/python3.7/site-packages/torch/autograd/function.py", line 76, in apply
return self._forward_cls.backward(self, *args)
File "/home/pycharm/pytorch-cifar/torch_utils.py", line 96, in backward
denominator = torch.norm(M.mm(v_k)).clamp(min=1.e-5)
File "/home/anaconda3/lib/python3.7/site-packages/torch/functional.py", line 713, in norm
return torch._C._VariableFunctions.frobenius_norm(input)
RuntimeError: cuda runtime error (77) : an illegal memory access was encountered at /opt/conda/conda-bld/pytorch_1549635019666/work/aten/src/THC/generated/…/THCReduceAll.cuh:317

(Thomas V) #2

Most likely, you're seeing the fallout of an earlier error: CUDA kernels launch asynchronously, so the failure is often reported at a later, unrelated call such as this norm. Use blocking launches to get the actual location of the error.
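Concretely, blocking launches means setting the CUDA_LAUNCH_BLOCKING environment variable, which makes each kernel launch synchronous so the stack trace points at the kernel that actually faulted (assuming main.py is the entry point, as in the traceback above):

```shell
# Re-run training with synchronous CUDA kernel launches;
# the traceback will then stop at the真 faulting operation.
CUDA_LAUNCH_BLOCKING=1 python main.py
```

Expect training to run noticeably slower with this flag, so use it only while debugging.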

Best regards

Thomas
