[solved] Creating MTGP constants failed error

I am getting the following error when trying to work with Dataset and DataLoader.

/pytorch/torch/lib/THC/THCTensorIndex.cu:325: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2]: block: [95,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/torch/lib/THC/THCTensorIndex.cu:325: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2]: block: [95,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
Traceback (most recent call last):
THCudaCheck FAIL file=/pytorch/torch/lib/THC/THCCachingHostAllocator.cpp line=258 error=59 : device-side assert triggered

RuntimeError: Creating MTGP constants failed. at /pytorch/torch/lib/THC/THCTensorRandom.cu:33

What does this error mean? I am not using DataParallel.

How did you solve this issue? I am having the same trouble.


@nok Did you ever figure this out? Also having the same issue

If anyone else stumbles on this issue: I got the same kind of error when using an embedding layer and passing out-of-range indices (index >= num_embeddings). This rather cryptic error was pretty unhelpful to debug, but hopefully this helps someone else.


Agree with @gautierdag. In my case I created the index tensor with torch.LongTensor but forgot to initialize its values, so it contained arbitrary (often out-of-range) indices. After switching to torch.zeros, the problem was gone.