Expected object of scalar type double but got scalar type float for argument 'other'

I am working on an object detection task and used a dataset where the box dimensions were provided as strings, so I converted them to float using the following commands.

df['x'] = pd.to_numeric(df['x'])
df['y'] = pd.to_numeric(df['y'])
df['w'] = pd.to_numeric(df['w'])
df['h'] = pd.to_numeric(df['h'])

Then I got the following error

Expected object of scalar type double but got scalar type float for argument 'other'

I understand the error but am not able to correct it, so I want to change the above columns to double.
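For reference, a minimal sketch (with a hypothetical frame standing in for the real CSV) showing that `pd.to_numeric` typically produces float64 columns, which map directly to `torch.double`:

```python
import pandas as pd
import torch

# Hypothetical frame; the real data comes from the dataset's annotation file.
df = pd.DataFrame({"x": ["1.5"], "y": ["2.0"], "w": ["10"], "h": ["20"]})
for col in ["x", "y", "w", "h"]:
    df[col] = pd.to_numeric(df[col])

# NumPy upcasts the mixed columns to float64, so the tensor comes out as double.
boxes = torch.from_numpy(df[["x", "y", "w", "h"]].values)
print(boxes.dtype)  # torch.float64
```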


Hi,

You can use .to(torch.float) and .to(torch.double) to switch a Tensor between the two types.
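For example (note that `.to()` returns a new tensor rather than modifying the original):

```python
import torch

t = torch.zeros(3)                # default dtype is torch.float32
t_double = t.to(torch.double)     # float64 copy
t_float = t_double.to(torch.float)  # back to float32
print(t_double.dtype, t_float.dtype)  # torch.float64 torch.float32
```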

Where exactly does the error come from? Can you share the full stack trace?


This is what I am getting

So I think that the problem lies with the datatype of the boxes

Looks like so. Can you add prints there to make sure that the targets you pass to your model have the right dtype?

Actually I changed the datatype but it doesn’t seem to work. So I printed the targets that I am getting from the dataloader. Any more suggestions?

Keep in mind that changing a Tensor datatype is not inplace, you need to do

your_tensor = your_tensor.to(torch.float)
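A small sketch of this pitfall: calling `.to()` without reassigning leaves the tensor's dtype unchanged.

```python
import torch

t = torch.zeros(3)
t.to(torch.double)       # returns a new tensor; t itself is unchanged
print(t.dtype)           # still torch.float32
t = t.to(torch.double)   # reassign to actually switch the dtype
print(t.dtype)           # torch.float64
```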

Sorry but your reply is not so clear to me.

@albanD I tried that, i.e. using a double tensor, and it ran successfully on the train set, but I am getting the same error for the validation set, even though it is loaded from the same dataset class and I use the same model. I compute the train and validation losses in the same function with the same model.

model = model.double()

def train(model, trainloader, validloader, optimizer, num_epochs=5):
    for epoch in tqdm(range(num_epochs)):
        train_loss_history = []
        valid_loss_history = []
        print("Epoch :", epoch + 1)
        for images, targets, image_ids in tqdm(trainloader):
            images = [torch.DoubleTensor(img.T).to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]

            train_loss_dict = model(images, targets)
            train_losses = sum(loss for loss in train_loss_dict.values())
            train_loss_history.append(train_losses.item())

            optimizer.zero_grad()
            train_losses.backward()
            optimizer.step()

        plt.plot(np.arange(len(train_loss_history)), train_loss_history)

        for images, targets, image_ids in tqdm(validloader):
            images = [torch.DoubleTensor(img.T).to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]

            valid_loss_dict = model(images, targets)
            valid_losses = sum(loss for loss in valid_loss_dict.values())
            valid_loss_history.append(valid_losses.item())

        plt.plot(np.arange(len(valid_loss_history)), valid_loss_history)

    return model

If you want, I can also share my Google Colab link. Thanks in advance.

Hi,

I’m not sure what the question here is? 🙂

But in your code, why do you use torch.DoubleTensor() on the result of the dataloader? Aren’t these Tensors already?
If they are numpy arrays, you should use torch.from_numpy() to get Tensors out of them.

Actually, they are numpy arrays, and I want double-type input for the double-type model, so that is why I did that.

In that case,

torch.from_numpy(img.T).to(torch.double) would do what you want.
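A minimal sketch of that conversion, assuming the image comes out of the dataset as a float32 numpy array (a random array stands in for a real image here):

```python
import numpy as np
import torch

img = np.random.rand(3, 224, 224).astype(np.float32)  # hypothetical image array
t = torch.from_numpy(img.T).to(torch.double)          # share memory, then cast
print(t.dtype)  # torch.float64
```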

I tried this too but it didn’t seem to work. I am getting the same error as I had mentioned above

Hi,

It might be simpler if you could get a small code sample that we can use to reproduce this. That will allow us to help you better.