I have a dataset in numpy format (actually, I just modified the CIFAR-10/MNIST datasets provided with PyTorch). Its dimensions are consistent with what a normal CNN expects. For example, if I write:
print(our_dataset.shape, our_labels.shape)
I get:
(10000, 3, 32, 32) (10000,)
which is fine. Now I cast the data into torch format using:
train_data = torch.from_numpy(our_dataset)
our_labels = torch.from_numpy(our_labels)
encapsulate it into a TensorDataset:
train = torch.utils.data.TensorDataset(train_data, our_labels)
and finally into a DataLoader:
trainloader = torch.utils.data.DataLoader(train, batch_size=128, shuffle=True)
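For reference, here is a self-contained sketch of the whole pipeline, with random arrays standing in for my modified CIFAR-10 data (same shapes and default dtypes as my real arrays, just fewer samples):

```python
import numpy as np
import torch
import torch.utils.data

# Stand-ins for my modified CIFAR-10 arrays (random data, smaller sample count)
our_dataset = np.random.rand(2048, 3, 32, 32)          # float64 by default
our_labels = np.random.randint(0, 10, size=(2048,))

train_data = torch.from_numpy(our_dataset)             # becomes a Double tensor
labels = torch.from_numpy(our_labels)

train = torch.utils.data.TensorDataset(train_data, labels)
trainloader = torch.utils.data.DataLoader(train, batch_size=128, shuffle=True)

# One batch comes out as (128, 3, 32, 32), still double precision
images, targets = next(iter(trainloader))
print(images.shape, images.dtype, targets.shape)
```

Everything up to this point runs without complaint; the error only appears once a batch hits the first convolution layer.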
All fine here. Now I build the neural network, and then when I do the training, I get the error:
TypeError: DoubleSpatialConvolutionMM_updateOutput received an invalid combination of arguments - got (int, torch.DoubleTensor, torch.DoubleTensor, torch.FloatTensor, torch.FloatTensor, torch.DoubleTensor, torch.DoubleTensor, long, long, int, int, int, int), but expected (int state, torch.DoubleTensor input, torch.DoubleTensor output, torch.DoubleTensor weight, [torch.DoubleTensor bias or None], torch.DoubleTensor finput, torch.DoubleTensor fgradInput, int kW, int kH, int dW, int dH, int padW, int padH)
If I am working on cuda, then the error changes to:
_cudnn_convolution_full_forward received an invalid combination of arguments - got (torch.cuda.DoubleTensor, torch.cuda.FloatTensor, torch.cuda.FloatTensor, torch.cuda.DoubleTensor, tuple, tuple, int, bool), but expected (torch.cuda.RealTensor input, torch.cuda.RealTensor weight, torch.cuda.RealTensor bias, torch.cuda.RealTensor output, std::vector<int> pad, std::vector<int> stride, int groups, bool benchmark)
Has anyone seen these errors before? In addition, is the right way to use a new dataset to first cast it to torch format, then build a TensorDataset, and finally a DataLoader?
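Update: from the error messages I suspect the mismatch is between my float64 (Double) data and the network's default float32 (Float) weights. Casting the numpy arrays to float32 (and the labels to int64) before wrapping them seems to make the error go away on my machine, but I'd like confirmation that this is the intended fix rather than a workaround:

```python
import numpy as np
import torch
import torch.utils.data

# Placeholder random arrays with the same dtypes my real data has
our_dataset = np.random.rand(2048, 3, 32, 32)    # float64
our_labels = np.random.randint(0, 10, size=(2048,))

# Cast to float32 so the tensors match the network's Float weights;
# labels go to int64 because loss functions like CrossEntropyLoss expect Long targets.
train_data = torch.from_numpy(our_dataset.astype(np.float32))
labels = torch.from_numpy(our_labels.astype(np.int64))

train = torch.utils.data.TensorDataset(train_data, labels)
trainloader = torch.utils.data.DataLoader(train, batch_size=128, shuffle=True)

images, targets = next(iter(trainloader))
print(images.dtype, targets.dtype)   # torch.float32 torch.int64
```

Is casting on the numpy side like this the idiomatic approach, or should the conversion happen on the tensors themselves (e.g. with `.float()`)?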