Problems when making my own dataset

I created a dataset like this:

mlp_train_dataset = torch.utils.data.TensorDataset(mlp_train_inputs, mlp_train_targets)
mlp_train_loader = torch.utils.data.DataLoader(mlp_train_dataset, batch_size=args.bs)

Then I got this error:

Traceback (most recent call last):
  File "new_main_grid16_v2.py", line 2090, in <module>
    mlp_train(epoch)
  File "new_main_grid16_v2.py", line 241, in mlp_train
    for batch_idx, (inputs, targets) in enumerate(mlp_train_loader):
  File "/home/mhha/.conda/envs/pytorchmh/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 179, in __next__
    batch = self.collate_fn([self.dataset[i] for i in indices])
  File "/home/mhha/.conda/envs/pytorchmh/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 109, in default_collate
    return [default_collate(samples) for samples in transposed]
  File "/home/mhha/.conda/envs/pytorchmh/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 109, in <listcomp>
    return [default_collate(samples) for samples in transposed]
  File "/home/mhha/.conda/envs/pytorchmh/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 112, in default_collate
    .format(type(batch[0]))))
TypeError: batch must contain tensors, numbers, dicts or lists; found <class 'torch.autograd.variable.Variable'>

Make sure that mlp_train_inputs and mlp_train_targets are Tensors, i.e. they weren't wrapped with torch.autograd.Variable or produced by an operation on Variables.
Or, if they are Variables and have to stay that way, pass them as TensorDataset(mlp_train_inputs.data, mlp_train_targets.data)
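
Something like this (just a quick sketch using your variable names) would tell you whether either of them is actually wrapped:

from torch.autograd import Variable

print(type(mlp_train_inputs), type(mlp_train_targets))

# unwrap only if it is actually a Variable (on 0.2/0.3 a plain Tensor has no .data)
if isinstance(mlp_train_inputs, Variable):
    mlp_train_inputs = mlp_train_inputs.data
if isinstance(mlp_train_targets, Variable):
    mlp_train_targets = mlp_train_targets.data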

When I check mlp_train_inputs and mlp_train_targets, mlp_train_inputs is a torch.cuda.FloatTensor and mlp_train_targets is a torch.cuda.IntTensor.

That's weird.
I tried replicating that dataloader and it works fine with cuda.FloatTensor and cuda.IntTensor; it only raises a TypeError when I create the dataset with Variables (as expected).
Do you perform some preprocessing on the Tensors? Some operation that may be changing your data into Variables? Did you check the data type just before creating the dataset?
Just to be sure, can you try:
mlp_train_dataset = torch.utils.data.TensorDataset(mlp_train_inputs.data, mlp_train_targets.data)

I preprocessed the CIFAR-100 images and put the values into a torch.cuda.FloatTensor. That FloatTensor is mlp_train_inputs. I don't do anything else with this tensor.

Your recommendation raises a runtime error like this:

Traceback (most recent call last):
  File "new_main_grid16_v2.py", line 222, in <module>
    mlp_train_dataset = torch.utils.data.TensorDataset(mlp_train_inputs.data, mlp_train_targets.data)
  File "/home/mhha/.conda/envs/pytorchmh/lib/python3.5/site-packages/torch/tensor.py", line 374, in data
    raise RuntimeError('cannot call .data on a torch.Tensor: did you intend to use autograd.Variable?')

Yeah, then those are indeed Tensors.
I have no idea what might be causing this :confused:

Is there any good example of making my own dataset?

If you're working with CIFAR, PyTorch has prebuilt datasets:
http://pytorch.org/docs/0.3.0/torchvision/datasets.html#cifar
And you can preprocess them with transforms; PyTorch provides several useful ones: http://pytorch.org/docs/0.3.0/torchvision/transforms.html, or you could always build your own transforms.

Also, you can check out PyTorch's tutorial on building custom Datasets:
http://pytorch.org/tutorials/beginner/data_loading_tutorial.html#dataset-class
which you can then wrap with a DataLoader like any regular Dataset.
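
A minimal sketch of such a custom Dataset (names are placeholders, not from your code):

from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):
    def __init__(self, inputs, targets):
        # inputs and targets are plain Tensors of equal length
        self.inputs = inputs
        self.targets = targets

    def __len__(self):
        return self.inputs.size(0)

    def __getitem__(self, idx):
        return self.inputs[idx], self.targets[idx]

# e.g. loader = DataLoader(MyDataset(some_inputs, some_targets), batch_size=128)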

What I want to do is load the prebuilt cifar10 dataset, reduce the image size with torch.nn.functional.max_pool2d, and then build the dataset again.

To make the targets, I read a txt file and turn it into a tensor.
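
Roughly what I have in mind for the targets (the file name is just an example):

import torch

# read one integer label per line and build a LongTensor from the list
with open("my_targets.txt", "r") as f:
    labels = [int(line) for line in f]
targets = torch.LongTensor(labels)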

If your targets are the standard CIFAR targets, you could try something like this:

transform = transforms.Compose([transforms.ToTensor(), nn.MaxPool2d(kernel_size=8, stride=8)])
cifar = torchvision.datasets.CIFAR100(path_to_CIFAR_data, transform=transform)
dataloader = torch.utils.data.DataLoader(cifar, batch_size=args.bs)

I’m not sure if nn.MaxPool2d would work but maybe it will.
However, as a side note, why not keep the regular dataset and apply the max pooling in the network instead, i.e. use nn.MaxPool2d as the first layer of your NN? (There is a rough sketch at the end of this post.)

If they aren't, you can check the tutorial on custom datasets (Writing Custom Datasets, DataLoaders and Transforms). In summary, you inherit from torch.utils.data.Dataset and implement __len__, __getitem__ and __init__ to read and transform your data.
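
Regarding the side note about pooling inside the network, a rough sketch (the layer sizes are just assumptions; pooling 32x32 CIFAR images with an 8x8 kernel gives 3*4*4 = 48 features):

import torch.nn as nn

class SmallMLP(nn.Module):
    def __init__(self, num_classes=100):
        super(SmallMLP, self).__init__()
        # pool each 3x32x32 image down to 3x4x4 before the linear layer
        self.pool = nn.MaxPool2d(kernel_size=8, stride=8)
        self.fc = nn.Linear(3 * 4 * 4, num_classes)

    def forward(self, x):
        x = self.pool(x)
        x = x.view(x.size(0), -1)
        return self.fc(x)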

I want to run a small MLP first, and then dynamically modify the CNN parameters based on the MLP results.

Here is my code.

Can you check this code?

if mlp:
    mlp_train_inputs = torch.cuda.FloatTensor(50000, 48)
    mlp_test_inputs = torch.cuda.FloatTensor(10000, 48)
    mlp_train_targets = torch.cuda.LongTensor(50000)
    mlp_test_targets = torch.cuda.LongTensor(10000)

    for batch_idx, (inputs, targets) in enumerate(train_loader2):
        mlp_train_inputs = F.max_pool2d(inputs, kernel_size=8, stride=8).view(F.max_pool2d(inputs, kernel_size=8, stride=8).size(0),-1).cuda()

    for batch_idx, (inputs, targets) in enumerate(test_loader2):
        mlp_test_inputs = F.max_pool2d(inputs, kernel_size=8, stride=8).view(F.max_pool2d(inputs, kernel_size=8, stride=8).size(0),-1).cuda()

    # print(mlp_train_inputs)

    f_small_train = open("20171130_small_MLP_train_targets.txt", "r")
    f_small_test = open("20171130_small_MLP_test_targets.txt", "r")

    for i in range(0,50000):
        mlp_train_targets[i] = int(f_small_train.readline())
    f_small_train.close()

    print(mlp_train_inputs)
    print(mlp_train_targets)

    for i in range(0,10000):
        mlp_test_targets[i] = int(f_small_test.readline())
    f_small_test.close()

    # print(mlp_test_targets)

    mlp_train_dataset = torch.utils.data.TensorDataset(mlp_train_inputs, mlp_train_targets)
    # mlp_test_dataset = torch.utils.data.TensorDataset(mlp_test_inputs, mlp_test_targets)

    #print(mlp_train_dataset)

    mlp_train_loader = torch.utils.data.DataLoader(mlp_train_dataset, batch_size=10000, drop_last=False)
    # mlp_test_loader = torch.utils.data.DataLoader(mlp_test_dataset, batch_size=10000, drop_last=False)

    # print(mlp_train_loader)
    # print(enumerate(mlp_train_loader))

# Small MLP Training
def mlp_train(epoch):
    print('\nEpoch: %d' % epoch)
    net2.train()
    train_loss = 0
    correct = 0
    total = 0

    for batch_idx, (inputs, targets) in enumerate(mlp_train_loader):
        if use_cuda:
            inputs, targets = inputs.cuda(), targets.cuda()
        optimizer2.zero_grad()
        inputs, targets = Variable(inputs), Variable(targets)
        outputs = net2(inputs)
        loss = criterion(outputs, targets)
        loss.backward()
        optimizer2.step()

        train_loss += loss.data[0]
        _, predicted = torch.max(outputs.data, 1)
        total += targets.size(0)
        correct += predicted.eq(targets.data).cpu().sum()

        progress_bar(batch_idx, len(mlp_train_loader), 'Loss: %.3f | Acc: %.3f%% (%d/%d)'
            % (train_loss/(batch_idx+1), 100.*correct/total, correct, total))

if mlp:
    if mlptr:
        for epoch in range(start_epoch, start_epoch + num_epoch):
            mlp_train(epoch)

Can you show me the declaration of train_loader2?

transform_train2 = transforms.Compose([transforms.ToTensor(),
                                      transforms.Normalize(mean=[0.485, 0.456, 0.406],std=[0.229, 0.224, 0.225])])
transform_test = transforms.Compose([transforms.ToTensor(),
                                     transforms.Normalize(mean=[0.485, 0.456, 0.406],std=[0.229, 0.224, 0.225])])

cifar_train2 = dset.CIFAR100("./", train=True, transform=transform_train2, target_transform=None, download=True)
cifar_test = dset.CIFAR100("./", train=False, transform=transform_test, target_transform=None, download=True)

train_loader2 = torch.utils.data.DataLoader(cifar_train2, batch_size=50000, shuffle=False, num_workers=2, drop_last=False)
test_loader2 = torch.utils.data.DataLoader(cifar_test, batch_size=10000, shuffle=False, num_workers=2, drop_last=False)

Can you show the result of those print statements?

You move the data to CUDA but the labels remain on the CPU; it is not necessary to move the data to the GPU here, since you do it later in the training loop. Furthermore, is this loop creating what you expect? It looks as if only the last batch would be stored.
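
If you did want to accumulate over several batches instead of overwriting, something along these lines (just a sketch) would collect everything:

import torch
import torch.nn.functional as F
from torch.autograd import Variable

pooled = []
for inputs, targets in train_loader2:
    # wrap in a Variable so max_pool2d also works on v0.3, then unwrap with .data
    x = F.max_pool2d(Variable(inputs), kernel_size=8, stride=8)
    pooled.append(x.view(x.size(0), -1).data)
mlp_train_inputs = torch.cat(pooled, 0)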

I think I found the issue in that same loop. F.max_pool2d requires a Variable as input in v0.3, so it shouldn't execute at all; I assume you're using v0.2, which accepts a Tensor as input but silently returns a Variable. This would make mlp_train_inputs a Variable, which causes the error in your dataloader, but since mlp_train_targets is not a Variable, it raised a runtime error when you tried it with .data.
So if this is indeed the problem, replace

mlp_train_dataset = torch.utils.data.TensorDataset(mlp_train_inputs, mlp_train_targets)

with

mlp_train_dataset = torch.utils.data.TensorDataset(mlp_train_inputs.data, mlp_train_targets)

Hopefully this will solve the problem

Here are the results of the print statements:

Variable containing:
2.2489 2.2489 2.2489 … 0.5136 0.5136 0.2173
2.2489 2.2489 2.2489 … 2.6400 2.6400 2.6400
2.1975 2.1633 2.1804 … 1.8557 1.7860 2.1694
… ⋱ …
2.1290 1.9749 1.9235 … 1.3328 1.3677 1.7685
0.5536 0.5364 0.4851 … 0.0605 0.3393 0.3916
-1.5699 -1.5870 -1.5699 … 0.1128 0.0082 0.1302
[torch.cuda.FloatTensor of size 50000x48 (GPU 0)]

0
1
1

1
1
1
[torch.cuda.LongTensor of size 50000 (GPU 0)]

Yeah, F.max_pool2d is indeed the culprit here. It is returning a Variable.
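
For reference, a quick way to see that behaviour (on v0.2 this prints the Variable class; on v0.3 it would not accept a plain Tensor, as noted above):

import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 32, 32)
print(type(F.max_pool2d(x, kernel_size=8, stride=8)))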

Yes!!

I did what you recommended. It works!