TypeError: only integer tensors of a single element can be converted to an index

I am facing the following error -

TypeError: only integer tensors of a single element can be converted to an index

in the following code, even though I am forcibly converting the indices to an integer dtype. The error occurs when I try to access specific embedding layers:

userEmbeddings = self.userEmbeds[userIndex]

The complete code is:

class EmbeddingModel(nn.Module):
    def __init__(self,userC,movieC,embedDim):
        super(EmbeddingModel, self).__init__()
        self.userEmbeds = nn.Embedding(userC, embedDim)
        self.movieEmbeds = nn.Embedding(movieC, embedDim)
        self.fc1 = nn.Linear(2*embedDim, embedDim)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(embedDim, 5)
        
    def forward(self, userIndex, movieIndex):
        userIndex = userIndex.to(dtype = torch.int, device = device)
        movieIndex = movieIndex.to(dtype = torch.int, device = device)
        print(userIndex.type(),movieIndex.type())
        # ----- Output - torch.cuda.IntTensor torch.cuda.IntTensor
        # Error occurs in the following line --------
        userEmbeddings = self.userEmbeds[userIndex]
        movieEmbeddings = self.movieEmbeds[movieIndex]
        inp = torch.cat([userEmbeddings,movieEmbeddings],1)
        out = self.fc1(inp)
        out = self.relu(out)
        out = self.fc2(out)
        return out

The following is the code that calls the model:

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
for epoch in range(epochs):
    for i, (users,movies,ratings) in enumerate(train_loader):        
        if torch.cuda.is_available():
            users = Variable(users.cuda())
            movies = Variable(movies.cuda())
            ratings = Variable(ratings.cuda())
        else:
            users = Variable(users)
            movies = Variable(movies)
            ratings = Variable(ratings)
        outputs = model(users,movies)

How do I index my model's embedding layers this way?

Could you try to pass the indices as LongTensors?
If they are not already of that type, just use .to(dtype=torch.long, device=device).

PS: Variables have been deprecated since PyTorch 0.4. If you are using a newer version, you can just use tensors instead.


Thank you!
It works after making the change you suggested, along with changing

userEmbeddings = self.userEmbeds[userIndex]
movieEmbeddings = self.movieEmbeds[movieIndex]

to:

userEmbeddings = self.userEmbeds(userIndex)
movieEmbeddings = self.movieEmbeds(movieIndex)
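For reference, the whole fixed forward now looks roughly like this (a sketch; device is the global defined in my training script):

def forward(self, userIndex, movieIndex):
    # nn.Embedding expects long (int64) indices and is called like a function
    userIndex = userIndex.to(dtype=torch.long, device=device)
    movieIndex = movieIndex.to(dtype=torch.long, device=device)
    userEmbeddings = self.userEmbeds(userIndex)
    movieEmbeddings = self.movieEmbeds(movieIndex)
    inp = torch.cat([userEmbeddings, movieEmbeddings], 1)
    out = self.fc1(inp)
    out = self.relu(out)
    out = self.fc2(out)
    return out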

This answer really helped me a lot. The important point is to use parentheses ‘()’ instead of square brackets ‘[]’: nn.Embedding is a module that has to be called, not indexed.

Thanks!!

I am new to PyTorch and I am running into the same error posted here, as you can see below:

File "/home/.../CenterFusion/src/lib/utils/pointcloud.py", line 282, in pc_dep_to_hm_torch
    bbox_int = torch.tensor([torch.floor(bbox[0]), 
TypeError: only integer tensors of a single element can be converted to an index

which is thrown by the following piece of code:

bbox_int = torch.tensor([torch.floor(bbox[0]),
                         torch.floor(bbox[1]),
                         torch.ceil(bbox[2]),
                         torch.ceil(bbox[3])], dtype=torch.int32)  # format: xyxy

where:

type bbox: <class 'torch.Tensor'>

Can anyone please help me? I could not seem to apply the solution @ptrblck provided in his comment to my problem.

Thanks!

That’s a bit weird, since you are already passing an integer index.
Could you print(bbox) and post the output here, please?

Hi @ptrblck , thanks for your quick reply.

I solved my problem.

I think the problem was a PyTorch version issue. I am using PyTorch 1.9.1 for my work.

I found that torch.floor() and torch.ceil() seem to return different types between earlier versions of torch and mine (PyTorch 1.9.1): in PyTorch 1.4.0 they appeared to return an int, while now they return a tensor.

Therefore, the solution to my problem was to change (in the file CenterFusion/src/lib/utils/pointcloud.py) the lines:

bbox_int = torch.tensor([torch.floor(bbox[0]),
                         torch.floor(bbox[1]),
                         torch.ceil(bbox[2]),
                         torch.ceil(bbox[3])], dtype=torch.int32)  # format: xyxy

to

bbox_int = torch.tensor([int(torch.floor(bbox[0])),
                         int(torch.floor(bbox[1])),
                         int(torch.ceil(bbox[2])),
                         int(torch.ceil(bbox[3]))], dtype=torch.int32)  # format: xyxy

Does it sound reasonable?
Thanks for the help!

I printed what you asked too:

type bbox:  <class 'torch.Tensor'>
bbox[0]:  tensor(137.9395, device='cuda:0')
type(bbox[0]):  <class 'torch.Tensor'>

Hmm, that’s strange. Let me check if this is expected behavior:

torch.tensor([torch.tensor(0.), torch.tensor(1.)]) # works
torch.tensor([torch.tensor(0), torch.tensor(1)], dtype=torch.int32) # works
torch.tensor([torch.tensor(0.), torch.tensor(1.)], dtype=torch.int32) # fails
# > TypeError: only integer tensors of a single element can be converted to an index
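As a workaround you could stack the scalar tensors first and cast afterwards, instead of building a new tensor from a Python list (a sketch, assuming bbox contains float scalar tensors as in your printout):

bbox_int = torch.stack([torch.floor(bbox[0]),
                        torch.floor(bbox[1]),
                        torch.ceil(bbox[2]),
                        torch.ceil(bbox[3])]).to(torch.int32)  # format: xyxy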

Any update? I also have the same error with the exact snippet quoted above.

Hello, I might be having a related issue. I have a custom Dataset where __getitem__ can take a list or tensor of integers as the index and return arrays from the data.

Indexing the custom Dataset works as expected, e.g.

data = InteractionDataset(TRAIN_PATH, n_rows=200000)
data[torch.tensor([2, 1, 23])]
# works

However when I take subsets of the data (train / validation split), the same indexing gives me the error described above:

validation_size = int(0.2 * len(data))
dtrain, dval = torch.utils.data.random_split(data, [len(data) - validation_size, validation_size])
dval[torch.tensor([2, 1, 23])]
# fails with only integer tensors of a single element can be converted to an index

I’ve tried converting to a long tensor as well (torch.tensor([2, 1, 23], dtype=torch.long)) but get the same result. My best guess is that random_split wraps the dataset in a torch.utils.data.Subset, which stores its indices as a plain Python list, and indexing a Python list with a multi-element tensor raises exactly this error (see the sketch below). Do you have any idea what might be going on here?
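A minimal reproduction of the message, with no Dataset involved:

import torch

indices = list(range(100))          # a Subset holds a plain Python list like this
indices[torch.tensor([2, 1, 23])]
# TypeError: only integer tensors of a single element can be converted to an index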

Thank you!


@tcuongd I have the same issue; have you solved it?

Hey, it’s been a while, so I’ve forgotten exactly what I was using the code for, but it seems like I pinned the problem down to the random_split() function and avoided using it altogether. So for training and validation I:

  • Manually created two List[int] (one for training and one for validation).
  • Extracted the validation data using the Dataset indexing which worked fine.
  • Within each epoch, used a custom sampler to create and access the training batches, e.g.
for j in range(epochs):
    train_loader = torch.utils.data.DataLoader(
        data, batch_size=None, sampler=ArrayBatchSampler(data, batch_size, train_indices)
    )
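(I don’t have the exact ArrayBatchSampler code anymore; it was a small custom class roughly along these lines: it reshuffles the fixed index list each epoch and yields it in batch-sized chunks. With batch_size=None the DataLoader passes each yielded list straight to the Dataset’s __getitem__, which accepts index lists.)

import torch
from torch.utils.data import Sampler

class ArrayBatchSampler(Sampler):
    def __init__(self, data, batch_size, indices):
        self.batch_size = batch_size
        self.indices = list(indices)

    def __iter__(self):
        # shuffle the fixed index list, then yield batch-sized chunks of it
        perm = torch.randperm(len(self.indices)).tolist()
        shuffled = [self.indices[i] for i in perm]
        for start in range(0, len(shuffled), self.batch_size):
            yield shuffled[start:start + self.batch_size]

    def __len__(self):
        return (len(self.indices) + self.batch_size - 1) // self.batch_size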

Thanks for your response.

I’m getting the same error, but I’m not doing any indexing: the line that throws the error is just a series of additions and multiplications between tensors:

return out_masked + (out - out_masked) * embedding_scale

and yet the error at this line reads

TypeError: only integer tensors of a single element can be converted to an index

I’m not sure how to debug this, given the difference between my code and error message.

@ptrblck Any suggestions? Thanks.


UPDATE: It turns out that embedding_scale was not the single number it was supposed to be, but rather a list (“[4]”). So apparently multiplying by a list gets treated as indexing? Interesting.

One can reproduce this error with the simple code

torch.tensor([1,2,3])*[4]

I wonder if there might be a more appropriate error message in this case, such as “Cannot multiply tensor by list”?

To fix the error, one can either unwrap the [4] or wrap it in a tensor: both

torch.tensor([1,2,3])*[4][0]
torch.tensor([1,2,3])*torch.tensor([4])

work fine.

The error is indeed not really helpful. Would you mind creating an issue on GitHub so that we could track and improve it, please?

Sure! Would that be a “Bug Report” or a “Feature Request”?

I think you could start with a feature request and someone could edit it if needed. I’m not in front of my workstation to check whether this is a regression or if your code snippet was already failing in older releases.