RuntimeError: Expected object of scalar type Double but got scalar type Float for argument #2 'weight'

My code is:

classes=["not a face","face"]
path = "F:/project/Database/sample1.jpg"
b=cv2.imread(path)
q=torch.from_numpy(b)
print(q.shape)
d=np.transpose(q.numpy(), (2, 0, 1))
print(d.shape)
print(type(d))
w=torch.from_numpy(d)
w = w.unsqueeze(0)
w= w.double()
print(type(w))
print(w.shape)
print(type(w))

def vis_model(net):
    was_training = net.training
    net.eval()
    with torch.no_grad():
        outp = net(w)              # uses the tensor w prepared above; this line raises the RuntimeError
        pred = torch.max(outp, 1)  # (max values, predicted class indices)
    net.train(mode=was_training)   # restore the previous mode
    return pred

Basically, I have built a model and now I want to give a single image as input to the model and get the prediction.


A fix would be to call .double() to convert to 64-bit float.


I have done that, but I am still getting the error.


You should do that for both your inputs and your model, as in the sketch below.
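A minimal sketch reusing the names from the snippet above (either cast both to double, or both to float):

net = net.double()   # cast all model parameters to float64
w = w.double()       # cast the input tensor to float64
outp = net(w)        # dtypes now match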


Which line triggers that error? It seems the error comes from the model itself, which is the only part of the code not shown in the example.

As @Kushaj stated, try converting the model parameters to double or your input image to float (either one works).

But without reproducible code there is only so much we can do to help…


Here is the code for the model. It would be really helpful if you could give a little more information about the error.

class CNN(nn.Module):
    
    def __init__(self, out_1=13, out_2=32):
        super(CNN, self).__init__()
        self.cnn1 = nn.Conv2d(in_channels=3, out_channels=out_1, kernel_size=3, padding=1)
        self.relu1 = nn.ReLU()
        self.maxpool1 = nn.MaxPool2d(kernel_size=2)
        self.cnn2 = nn.Conv2d(in_channels=out_1, out_channels=out_2, kernel_size=5, stride=1, padding=0)
        self.relu2 = nn.ReLU()
        self.maxpool2 = nn.MaxPool2d(kernel_size=2)
        self.fc1 = nn.Linear(out_2 * 23 * 23, 2)
    
    def forward(self, x):
        print(x.shape)
        out = self.cnn1(x)
        print(out.shape)
        out = self.relu1(out)
        out = self.maxpool1(out)
        out = self.cnn2(out)
        out = self.relu2(out)
        out = self.maxpool2(out)
        #print(out.shape)
        out = out.view(out.size(0), -1)
        out = self.fc1(out)
        return out
    
    def activations(self, x):
        z1 = self.cnn1(x)
        a1 = self.relu1(z1)
        out = self.maxpool1(a1)
        
        z2 = self.cnn2(out)
        a2 = self.relu2(z2)
        out = self.maxpool2(a2)
        out = out.view(out.size(0),-1)
        return z1, a1, z2, a2, out

If required, here is how I trained the model:

n_epochs=3
loss_list=[]
accuracy_list=[]
N_test=len(validation_dataset)

def train_model(n_epochs):
    for epoch in range(n_epochs):
        for x, y in train_loader:
            optimizer.zero_grad()
            z = net(x)
            loss = criterion(z, y)
            loss.backward()
            optimizer.step()

        correct=0
        #perform a prediction on the validation  data  
        for x_test, y_test in validation_loader:
            z = net(x_test)
            _, yhat = torch.max(z.data, 1)
            correct += (yhat == y_test).sum().item()
        accuracy = correct / N_test
        accuracy_list.append(accuracy)
        loss_list.append(loss.data)
train_model(n_epochs)

Additional information:
2 classes, each with 624 images of shape 3x100x100.
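For reference, a quick shape trace through the CNN above for a 3x100x100 input (assuming that input size, which matches the description) shows why fc1 expects out_2 * 23 * 23 features:

# input                   -> 3     x 100 x 100
# cnn1 (3x3, padding=1)   -> out_1 x 100 x 100
# maxpool1 (2x2)          -> out_1 x 50  x 50
# cnn2 (5x5, padding=0)   -> out_2 x 46  x 46
# maxpool2 (2x2)          -> out_2 x 23  x 23  -> flattened to out_2 * 23 * 23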


Which line exactly throws the RuntimeError?

This is the complete error message. I get the error when I call the function.

RuntimeError                              Traceback (most recent call last)
<ipython-input-13-1ad2836344b5> in <module>()
----> 1 vis_model(net)

<ipython-input-12-d875cfa1fa86> in vis_model(net)
     18     net.eval()
     19     with torch.no_grad():
---> 20         outp=net(w)
     21         pred=torch.max(outp,1)
     22 

C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    487             result = self._slow_forward(*input, **kwargs)
    488         else:
--> 489             result = self.forward(*input, **kwargs)
    490         for hook in self._forward_hooks.values():
    491             hook_result = hook(self, input, result)

<ipython-input-4-ce890d3bdcf7> in forward(self, x)
     13     def forward(self, x):
     14         print(x.shape)
---> 15         out = self.cnn1(x)
     16         print(out.shape)
     17         out = self.relu1(out)

C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    487             result = self._slow_forward(*input, **kwargs)
    488         else:
--> 489             result = self.forward(*input, **kwargs)
    490         for hook in self._forward_hooks.values():
    491             hook_result = hook(self, input, result)

C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\conv.py in forward(self, input)
    318     def forward(self, input):
    319         return F.conv2d(input, self.weight, self.bias, self.stride,
--> 320                         self.padding, self.dilation, self.groups)
    321 
    322 

RuntimeError: Expected object of scalar type Double but got scalar type Float for argument #2 'weight'

Can you run, before you enter the training loop:

net = net.float()

It will transform the model parameters to float.

And then in your training loop:

z = net(x.float())

That should proceed without error.

PS: replace .float() with .double() if you wish to have the network and data in double-precision format.
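Applied to the inference snippet from the original post, the same idea would look like this (a sketch that keeps the model in its default float32):

w = w.float()              # match the input dtype to the model parameters
outp = net(w)
pred = torch.max(outp, 1)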


This worked… Thanks

I got the same error:

RuntimeError: Expected object of scalar type Double but got scalar type Float for argument #2 ‘weight’

so I'm wondering why that error happens,
and why it still fails when I have converted the dataset to double.

Why must I also convert the model to double?

Both the data and the model parameters should have the same dtype.
If you’ve converted your data to double, you would have to do the same for your model.
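As a minimal standalone illustration of the mismatch (not taken from the thread above):

import torch
import torch.nn as nn

x = torch.randn(1, 3, 8, 8).double()   # float64 input
conv = nn.Conv2d(3, 4, 3)              # parameters are float32 by default
# conv(x)                              # would raise the dtype-mismatch RuntimeError
conv = conv.double()                   # cast the parameters to float64
out = conv(x)                          # dtypes match, runs fine

Calling .float() on the input instead of .double() on the module works just as well; the only requirement is that both sides agree.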


Hi, I have the same issue, but I couldn't fix it. Could you please help? The error comes from the last line.

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from numpy import genfromtxt
from torch.autograd import Variable
from torch.utils.data import TensorDataset, DataLoader

np_data = genfromtxt('Top10_data.csv', delimiter=',', dtype='complex', skip_header=0)
inputs_T = np_data[:, 0:20].real
targets_T = np_data[:, 20:22].real

inputs = torch.from_numpy(inputs_T)
targets = torch.from_numpy(targets_T)


train_ds = TensorDataset(inputs, targets)



batch_size = 500
train_dl = DataLoader(train_ds, batch_size, shuffle=True)


class SimpleNet(nn.Module):
    # Initialize the layers
    def __init__(self):
        super().__init__()
        self.linear1 = nn.Linear(20, 20)
        self.act1 = nn.ReLU()  # Activation function
        self.linear2 = nn.Linear(20, 2)

    # Perform the computation
    def forward(self, x):
        x = self.linear1(x)
        x = self.act1(x)
        x = self.linear2(x)
        return x


model = SimpleNet()

opt = torch.optim.SGD(model.parameters(), 1e-5)

loss_fn = F.mse_loss


def fit(num_epochs, model, loss_fn, opt):
    for epoch in range(num_epochs):
        for xb, yb in train_dl:
            # Generate predictions
            xb = Variable(xb.float(), requires_grad=False)
            yb = Variable(yb.float(), requires_grad=False)
            pred = model(xb)
            loss = loss_fn(pred, yb)
            # Perform gradient descent
            loss.backward()
            opt.step()
            opt.zero_grad()
        print('Training loss: ', loss_fn(model(inputs), targets))


fit(100, model, loss_fn, opt)

NumPy uses float64 as its default dtype, so call .float() on these tensors before passing them to the TensorDataset:

inputs = torch.from_numpy(inputs_T).float()
targets = torch.from_numpy(targets_T).float()

(or cast the NumPy arrays with astype beforehand).
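For example, the astype route would look like this (a sketch using the same variable names):

inputs = torch.from_numpy(inputs_T.astype(np.float32))
targets = torch.from_numpy(targets_T.astype(np.float32))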


Thank you so much. It works. Great!

That worked. Thank you.


This worked for me as well!! Thanks

Thanks, that worked. I had the same problem.

Dear all,

I have a similar issue and would appreciate some help from you guys.
class VehicleDataset(Dataset):

    def __init__(self, small_sequences):
        self.sequences = small_sequences

    def __len__(self):
        return len(self.sequences)

    def __getitem__(self, idx):
        sequence, cluster_label = self.sequences[idx]

        return dict(
            sequence=torch.transpose(torch.Tensor(sequence.to_numpy()), 0, 1),
            cluster_label=torch.tensor(cluster_label).long()
        )

class VehicleDataModule(pl.LightningDataModule):
    def __init__(self, train_sequences, test_sequences, batch_size=8):
        super().__init__()
        self.train_sequences = train_sequences
        self.test_sequences = test_sequences
        self.batch_size = batch_size

    def setup(self):
        self.train_dataset = VehicleDataset(self.train_sequences)
        self.test_dataset = VehicleDataset(self.test_sequences)

    def train_dataloader(self):
        return DataLoader(
            self.train_dataset,
            batch_size=self.batch_size,
            shuffle=True,
            num_workers=2
        )

    def val_dataloader(self):
        return DataLoader(
            self.test_dataset,
            batch_size=1,
            shuffle=False,
            num_workers=1
        )

    def test_dataloader(self):
        return DataLoader(
            self.test_dataset,
            batch_size=1,
            shuffle=False,
            num_workers=1
        )

N_EPOCHS = 50
BATCH_SIZE = 64

class TCNModel(nn.Module):
    def __init__(self, num_inputs, n_classes, num_channels, kernel_size=3, dropout=0.3):
        super(TCNModel, self).__init__()
        self.tcn = TemporalConvNet(
            num_inputs, num_channels, kernel_size=kernel_size, dropout=dropout
        )
        self.linear = nn.Linear(num_channels[-1], n_classes)

    def forward(self, x):
        y1 = self.tcn(x)
        out = self.linear(y1[:, :, -1])
        return out

Define Lightning Module

class Classification(pl.LightningModule):

    def __init__(self, num_inputs: int, n_classes: int, num_channels):
        super().__init__()
        self.model = TCNModel(num_inputs, n_classes, num_channels)
        self.criterion = nn.CrossEntropyLoss()

    def forward(self, x, cluster_labels=None):
        output = self.model(x)
        loss = 0
        if cluster_labels is not None:
            loss = self.criterion(output, cluster_labels)
        return loss, output

    def training_step(self, batch, batch_idx):
        sequences = batch["sequence"]
        cluster_labels = batch["cluster_label"]

        loss, outputs = self(sequences, cluster_labels)
        predictions = torch.argmax(outputs, dim=1)
        step_accuracy = accuracy(predictions, cluster_labels)
        self.log("train_loss", loss, prog_bar=True, logger=True)
        self.log("train_accuracy", step_accuracy, prog_bar=True, logger=True)
        # in training step
        self.logger.experiment.add_scalars("losses", {"train_loss": loss}, global_step=self.current_epoch)
        return {"loss": loss, "accuracy": step_accuracy}

    def validation_step(self, batch, batch_idx):
        sequences = batch["sequence"]
        cluster_labels = batch["cluster_label"]

        loss, outputs = self(sequences, cluster_labels)
        predictions = torch.argmax(outputs, dim=1)
        step_accuracy = accuracy(predictions, cluster_labels)
        self.log("val_loss", loss, prog_bar=True, logger=True)
        self.log("val_accuracy", step_accuracy, prog_bar=True, logger=True)
        # in validation step
        self.logger.experiment.add_scalars("losses", {"val_loss": loss}, global_step=self.current_epoch)
        return {"loss": loss, "accuracy": step_accuracy}

    def test_step(self, batch, batch_idx):
        sequences = batch["sequence"]
        cluster_labels = batch["cluster_label"]

        loss, outputs = self(sequences, cluster_labels)
        predictions = torch.argmax(outputs, dim=1)
        step_accuracy = accuracy(predictions, cluster_labels)
        self.log("test_loss", loss, prog_bar=True, logger=True)
        self.log("test_accuracy", step_accuracy, prog_bar=True, logger=True)
        return {"loss": loss, "accuracy": step_accuracy}

    def configure_optimizers(self):
        return optim.Adam(self.model.parameters(), lr=0.001)

model = Classification(num_inputs = 3, n_classes = 4, num_channels=[128]*4)

The error happens at the step_accuracy line (the accuracy(predictions, cluster_labels) call).

This error only happens when I set the default datatype at the beginning of the script:
torch.set_default_dtype(torch.float64)

thanks

The error is raised in the torchmetrics module, which seems to rely on float32 being the default type.
If you want to keep the default type set to float64, you might need to explicitly cast the tensors to the expected type before passing them to torchmetrics.
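To illustrate why the default dtype matters (a standalone sketch, not taken from the thread):

import torch
import torch.nn as nn

torch.set_default_dtype(torch.float64)
lin = nn.Linear(4, 2)                  # parameters are now created as float64
x = torch.randn(1, 4)                  # new floating-point tensors default to float64
print(lin.weight.dtype, x.dtype)       # torch.float64 torch.float64
# a library that still creates float32 tensors internally will then mismatch,
# so cast explicitly at the boundary (e.g. some_tensor.float()) where needed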
