RuntimeError: CUDA error: device-side assert triggered when using nn.TransformerEncoder

Hi, I have a problem.
I'm trying to train a deep learning model with nn.TransformerEncoder, but I get the error message below.

RuntimeError: CUDA error: device-side assert triggered

Any suggestions would be appreciated.
Here is my code:

class Encoder(nn.Module):
    def __init__(self, num_embeddings, embedding_dim, num_layers):
        super().__init__()
        self.em1 = nn.Embedding(
            num_embeddings=num_embeddings, embedding_dim=embedding_dim
        )

        encoder_layer = nn.TransformerEncoderLayer(d_model=embedding_dim, nhead=5)
        self.transformer_encoder = nn.TransformerEncoder(
            encoder_layer, num_layers=num_layers
        )

    def forward(self, torch_comp, torch_comp_wt):
        ratio = torch_comp_wt
        sym = torch_comp

        desc = self.em1(torch_comp.to(torch.long))
        desc = self.transformer_encoder(desc)  # <--- error occurs here
        vec = torch.einsum("ij,ijk->ijk", ratio, desc)

        vec = torch.mean(vec, dim=-1)
        vec, _ = torch.sort(vec, dim=1)
        vec = torch.cat((sym, ratio, vec), dim=1)

        return vec


class Net(nn.Module):
    def __init__(self, input_dim, out_dim, hidden_dim, num_embeddings, embedding_dim, num_layers):
        super().__init__()
        self.embed = Encoder(num_embeddings, embedding_dim, num_layers)

        self.share_block = nn.Sequential(
            # nn.BatchNorm1d(input_dim),
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Dropout(p=0.1),
            nn.Linear(hidden_dim, int(hidden_dim / 2)),
            nn.ReLU(),
            nn.Dropout(p=0.1),
            nn.Linear(int(hidden_dim / 2), out_dim),
        )

    def forward(self, torch_comp, torch_comp_wt):
        x = self.embed(torch_comp, torch_comp_wt)
        x = self.share_block(x)
        _out = self.head(x)

        return _out
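For reference, here is a minimal sketch of how I instantiate and call the model (all dimensions below are placeholder assumptions, not my real values). Note that nn.TransformerEncoderLayer requires embedding_dim to be divisible by nhead=5:

import torch

num_embeddings = 100   # assumption: must exceed the largest index in torch_comp
embedding_dim = 20     # assumption: must be divisible by nhead=5
seq_len, batch = 8, 4

model = Net(
    input_dim=3 * seq_len,  # sym, ratio, and vec each contribute seq_len columns
    out_dim=1,
    hidden_dim=64,
    num_embeddings=num_embeddings,
    embedding_dim=embedding_dim,
    num_layers=2,
)

torch_comp = torch.randint(0, num_embeddings, (batch, seq_len)).float()
torch_comp_wt = torch.rand(batch, seq_len)
out = model(torch_comp, torch_comp_wt)  # shape: (batch, out_dim)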

Could you rerun the code with CUDA_LAUNCH_BLOCKING=1 python script.py args and check the stack trace, please?
A device-side assert is often triggered by an invalid indexing operation.
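For example, an out-of-range index into nn.Embedding raises an IndexError immediately on the CPU, while on the GPU the same lookup surfaces as an asynchronous device-side assert. A minimal sketch:

import torch
import torch.nn as nn

emb = nn.Embedding(num_embeddings=10, embedding_dim=4)
idx = torch.tensor([0, 3, 10])  # 10 is invalid: valid indices are 0..9

try:
    emb(idx)  # on the CPU this fails right away with a clear message
except IndexError as e:
    print(e)  # "index out of range in self"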

Should os.environ['CUDA_LAUNCH_BLOCKING'] = 1 be os.environ['CUDA_LAUNCH_BLOCKING'] = '1'?

os.environ['CUDA_LAUNCH_BLOCKING'] = 1 will raise the exception:

TypeError: str expected, not int
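Yes, it should be the string '1'. A sketch of the in-script variant (the variable has to be set before CUDA is initialized, so exporting it in the shell is the more reliable option):

import os
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'  # value must be a str, not an int

import torch  # import torch only after setting the variable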

When I use os.environ['CUDA_LAUNCH_BLOCKING'] = '1', I get the error message below, and the error occurs in
desc = self.em1(torch_comp.to(torch.long)).

Error message:
RuntimeError: CUDA error: device-side assert triggered

/opt/conda/conda-bld/pytorch_1646755953518/work/aten/src/ATen/native/cuda/Indexing.cu:703: indexSelectLargeIndex: block: [86,0,0], thread: [32,0,0] Assertion srcIndex < srcSelectDimSize failed.

So my guess was right: the indexing in the embedding layer self.em1 fails. Make sure the input only contains valid indices in [0, num_embeddings - 1].
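A quick way to verify this (a sketch using the names torch_comp and num_embeddings from your code above):

# sanity-check the embedding inputs before the lookup
idx = torch_comp.to(torch.long)
print(idx.min().item(), idx.max().item(), num_embeddings)
assert 0 <= idx.min() and idx.max() < num_embeddings, \
    "embedding indices out of range"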

I made a mistake when setting num_embeddings!
Thank you for your help!!