RuntimeError: Input type (double) and bias type (float) should be the same

Hi, I’m currently using uconvlstm, and when trying to train the model I get the following error:
Input type (double) and bias type (float) should be the same

I have searched for similar issues but not sure where to start.

Full stack trace below:


EPOCH 1/10
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-16-e53f2e9fca6d> in <module>
      6 
      7     model.train()
----> 8     train_metrics = iterate(
      9         model,
     10         data_loader=train_loader,

11 frames
<ipython-input-11-95ffd484195c> in iterate(model, data_loader, criterion, optimizer, mode, device)
     60       elif mode != 'val':
     61           optimizer.zero_grad()
---> 62           out = model(x, batch_positions=dates)
     63           val_accuracy.append(get_accuracy(out, y))
     64       else:

/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1188         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1189                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190             return forward_call(*input, **kwargs)
   1191         # Do not call functions when jit is used
   1192         full_backward_hooks, non_full_backward_hooks = [], []

<ipython-input-3-603af17d624d> in forward(self, input, batch_positions)
    369         )  # BxT pad mask
    370 
--> 371         out = self.in_conv.smart_forward(input)
    372 
    373         feature_maps = [out]

<ipython-input-3-603af17d624d> in smart_forward(self, input)
     32                         * self.pad_value
     33                     )
---> 34                     temp[~pad_mask] = self.forward(out[~pad_mask])
     35                     out = temp
     36                 else:

<ipython-input-3-603af17d624d> in forward(self, input)
    110 
    111     def forward(self, input):
--> 112         return self.conv(input)
    113 
    114 

/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1188         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1189                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190             return forward_call(*input, **kwargs)
   1191         # Do not call functions when jit is used
   1192         full_backward_hooks, non_full_backward_hooks = [], []

<ipython-input-3-603af17d624d> in forward(self, input)
     89 
     90     def forward(self, input):
---> 91         return self.conv(input)
     92 
     93 

/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1188         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1189                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190             return forward_call(*input, **kwargs)
   1191         # Do not call functions when jit is used
   1192         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.8/dist-packages/torch/nn/modules/container.py in forward(self, input)
    202     def forward(self, input):
    203         for module in self:
--> 204             input = module(input)
    205         return input
    206 

/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1188         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1189                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190             return forward_call(*input, **kwargs)
   1191         # Do not call functions when jit is used
   1192         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py in forward(self, input)
    461 
    462     def forward(self, input: Tensor) -> Tensor:
--> 463         return self._conv_forward(input, self.weight, self.bias)
    464 
    465 class Conv3d(_ConvNd):

/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
    454     def _conv_forward(self, input: Tensor, weight: Tensor, bias: Optional[Tensor]):
    455         if self.padding_mode != 'zeros':
--> 456             return F.conv2d(F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode),
    457                             weight, bias, self.stride,
    458                             _pair(0), self.dilation, self.groups)

RuntimeError: Input type (double) and bias type (float) should be the same

Any help regarding this would be greatly appreciated.

First-order guess: check the dtype of your input tensor. If it is torch.float64 (double), convert it to torch.float32 before feeding it into the model. This commonly happens when the data originates from a NumPy array, since NumPy defaults to float64 while PyTorch model parameters default to float32.
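A minimal sketch of the mismatch and the fix, using a plain Conv2d as a hypothetical stand-in for the uconvlstm model (the model name and shapes here are illustrative, not from the original code):

```python
import torch
import torch.nn as nn

# Stand-in for the model; its weights and bias default to float32
model = nn.Conv2d(3, 8, kernel_size=3, padding=1)

# A double (float64) input, e.g. a tensor built from a NumPy array,
# which defaults to float64
x = torch.rand(1, 3, 16, 16, dtype=torch.float64)

# model(x) here would raise:
# RuntimeError: Input type (double) and bias type (float) should be the same

# Fix: cast the input to float32 before the forward pass
out = model(x.float())
print(out.dtype)  # torch.float32
```

Alternatively, the cast can be done once where the batch is loaded in the training loop (e.g. `x = x.to(torch.float32)` before `model(x, batch_positions=dates)`), which keeps the model code unchanged.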