When I do data = data.to(device), where data comes from a DataLoader, I am getting the following error:
0%|          | 0/5081 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/n/app/python/3.7.4/lib/python3.7/multiprocessing/queues.py", line 236, in _feed
    obj = _ForkingPickler.dumps(obj)
  File "/n/app/python/3.7.4/lib/python3.7/multiprocessing/reduction.py", line 51, in dumps
    cls(buf, protocol).dump(obj)
  File "/home/vym1/nn2/lib/python3.7/site-packages/torch/multiprocessing/reductions.py", line 134, in reduce_tensor
    raise RuntimeError("Cowardly refusing to serialize non-leaf tensor which requires_grad, "
RuntimeError: Cowardly refusing to serialize non-leaf tensor which requires_grad, since autograd does not support crossing process boundaries. If you just want to transfer the data, call detach() on the tensor before serializing (e.g., putting it on the queue).
I’ve never seen this before, and when running on a different dataset with a different model, it works fine. Does anyone know why this is occurring?
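For context, here is a minimal sketch of the kind of setup I suspect reproduces the issue (the dataset class, shapes, and names here are hypothetical, not my actual code). If a Dataset hands back tensors that were produced by an operation on a requires_grad tensor, each sample is a non-leaf tensor with requires_grad=True, which is exactly what torch.multiprocessing refuses to pickle when num_workers > 0:

```python
import torch
from torch.utils.data import Dataset

class ToyDataset(Dataset):
    """Hypothetical dataset whose samples are non-leaf tensors."""

    def __init__(self):
        base = torch.randn(10, 3, requires_grad=True)
        # Result of an op on a requires_grad tensor -> non-leaf, requires grad.
        self.features = base * 2.0

    def __len__(self):
        return len(self.features)

    def __getitem__(self, idx):
        # Returning this directly trips the serialization error
        # when a DataLoader worker tries to send it across processes.
        return self.features[idx]

sample = ToyDataset()[0]
print(sample.requires_grad, sample.is_leaf)  # True False

# The fix the error message suggests: detach before the sample
# leaves __getitem__, so nothing autograd-tracked crosses processes.
safe_sample = sample.detach()
print(safe_sample.requires_grad)  # False
```

With num_workers=0 this would pass unnoticed, since no pickling happens, which might explain why the other dataset/model combination works fine.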