Traceback (most recent call last):
  File "new_main_grid16_v2.py", line 2090, in <module>
    mlp_train(epoch)
  File "new_main_grid16_v2.py", line 241, in mlp_train
    for batch_idx, (inputs, targets) in enumerate(mlp_train_loader):
  File "/home/mhha/.conda/envs/pytorchmh/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 179, in __next__
    batch = self.collate_fn([self.dataset[i] for i in indices])
  File "/home/mhha/.conda/envs/pytorchmh/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 109, in default_collate
    return [default_collate(samples) for samples in transposed]
  File "/home/mhha/.conda/envs/pytorchmh/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 109, in <listcomp>
    return [default_collate(samples) for samples in transposed]
  File "/home/mhha/.conda/envs/pytorchmh/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 112, in default_collate
    .format(type(batch[0]))))
TypeError: batch must contain tensors, numbers, dicts or lists; found <class 'torch.autograd.variable.Variable'>
Make sure that mlp_train_inputs and mlp_train_targets are Tensors, i.e. that they weren't wrapped with torch.autograd.Variable or produced by an operation using Variables.
Or, if they are Variables and that has to be so, pass them as TensorDataset(mlp_train_inputs.data, mlp_train_targets.data)
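For reference, here is a minimal sketch of the plain-tensor path that default_collate expects. The shapes and variable names are illustrative stand-ins for mlp_train_inputs / mlp_train_targets, not taken from the original script:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Plain tensors with no autograd wrapping -- this is what the
# DataLoader's default collate function accepts.
inputs = torch.randn(8, 3, 32, 32)        # small CIFAR-like dummy batch
targets = torch.randint(0, 100, (8,))     # dummy class labels

dataset = TensorDataset(inputs, targets)
loader = DataLoader(dataset, batch_size=4)

for batch_inputs, batch_targets in loader:
    print(batch_inputs.shape, batch_targets.shape)
```

On the 0.3-era PyTorch in this thread, building the TensorDataset from Variables instead of Tensors is what triggers the TypeError above.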
That's weird.
I tried replicating that dataloader and it works fine with cuda.FloatTensor and cuda.IntTensor; it only raises a TypeError when I create the dataset with Variables (as expected).
Do you perform some preprocessing on the Tensors? Some operation that may be changing your data into Variables? Did you check the data type just before creating the dataset?
Just to be sure, can you try:
mlp_train_dataset = torch.utils.data.TensorDataset(mlp_train_inputs.data, mlp_train_targets.data)
I preprocessed the CIFAR-100 images and put the values into a torch.cuda.FloatTensor. That FloatTensor is mlp_train_inputs; nothing else is done to this tensor.
Your recommendation raises a RuntimeError like this:
Traceback (most recent call last):
  File "new_main_grid16_v2.py", line 222, in <module>
    mlp_train_dataset = torch.utils.data.TensorDataset(mlp_train_inputs.data, mlp_train_targets.data)
  File "/home/mhha/.conda/envs/pytorchmh/lib/python3.5/site-packages/torch/tensor.py", line 374, in data
    raise RuntimeError('cannot call .data on a torch.Tensor: did you intend to use autograd.
I'm not sure if nn.MaxPool2d would work, but maybe it will.
However, as a side note: why not apply the max pooling to the regular dataset on the fly, i.e. use nn.MaxPool2d as the first layer of your NN?
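A minimal sketch of that side note, pooling inside the network instead of preprocessing the dataset. The layer sizes and class count are illustrative assumptions (CIFAR-100-shaped input), not from the original model:

```python
import torch
import torch.nn as nn

# Downsampling happens per batch inside the model, so the stored
# dataset keeps the original 32x32 images.
model = nn.Sequential(
    nn.MaxPool2d(2),              # 32x32 -> 16x16
    nn.Flatten(),                 # (N, 3, 16, 16) -> (N, 768)
    nn.Linear(3 * 16 * 16, 100),  # 100 classes, as in CIFAR-100
)

x = torch.randn(4, 3, 32, 32)     # dummy batch
out = model(x)
print(out.shape)                  # torch.Size([4, 100])
```

This keeps the dataset untouched, which sidesteps the Variable-in-dataset problem discussed below entirely.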
Can you show the result of those print statements?
You move the data to CUDA but the labels remain on the CPU. It is not necessary to move the data to the GPU here, since you do it later in the training loop. Furthermore, is this loop creating what you expect? It looks as if only the last batch would be stored.
I think I found the issue, in that same loop. In v0.3, F.max_pool2d requires a Variable as input, so this code shouldn't execute at all; I therefore assume you're using v0.2, which accepts a Tensor as input but silently returns a Variable. That would make mlp_train_inputs a Variable, which causes the TypeError in your dataloader; and since mlp_train_targets is not a Variable, calling .data on it raised the RuntimeError when you tried my suggestion.
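A hedged sketch of the fix, assuming the preprocessing loop pools the images with F.max_pool2d before building the dataset (names and shapes are illustrative). In v0.2 the unwrapping step was .data on the returned Variable; on current PyTorch the equivalent is .detach():

```python
import torch
import torch.nn.functional as F
from torch.utils.data import TensorDataset

imgs = torch.randn(8, 3, 32, 32)          # dummy preprocessed images
pooled = F.max_pool2d(imgs, 2)            # 32x32 -> 16x16
# Unwrap before building the dataset so the loader sees plain tensors
# (.data on the Variable in v0.2; .detach() in modern PyTorch).
pooled = pooled.detach()
targets = torch.randint(0, 100, (8,))     # dummy labels, already tensors

dataset = TensorDataset(pooled, targets)  # no Variables -> collate works
```

Only the pooled inputs need unwrapping; the targets never went through an autograd op, which is exactly why .data failed on them earlier.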
So, if this is indeed the problem, replace