The problem with Variable()

# Train the model
for epoch in range(2):
    for i, (images, labels) in enumerate(train_loader):
        print(type(images))
        images = Variable(images)
        labels = Variable(labels)
        print(type(images))
        # Forward + Backward + Optimize
        optimizer.zero_grad()
        outputs = cnn(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        
        if (i+1) % 100 == 0:
            print(loss.data)
            print('Epoch [%d/%d], Iter [%d/%d] Loss: %.4f'
                  % (epoch+1, 2, i+1, len(train_dataset)//BATCH_SIZE, loss.data[0]))

ERROR:

<class 'torch.LongTensor'>
<class 'torch.autograd.variable.Variable'>
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-26-5427cb169c61> in <module>()
      8         # Forward + Backward + Optimize
      9         optimizer.zero_grad()
---> 10         outputs = cnn(images)
     11         loss = criterion(outputs, labels)
     12         loss.backward()

/home/quoniammm/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    222         for hook in self._forward_pre_hooks.values():
    223             hook(self, input)
--> 224         result = self.forward(*input, **kwargs)
    225         for hook in self._forward_hooks.values():
    226             hook_result = hook(self, input, result)

<ipython-input-19-8341c87faa62> in forward(self, x)
     14 
     15     def forward(self, x):
---> 16         x = F.relu(self.conv1(x))
     17         x = F.max_pool2d(F.relu(self.conv2(x)), 2)
     18 

/home/quoniammm/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    222         for hook in self._forward_pre_hooks.values():
    223             hook(self, input)
--> 224         result = self.forward(*input, **kwargs)
    225         for hook in self._forward_hooks.values():
    226             hook_result = hook(self, input, result)

/home/quoniammm/anaconda3/lib/python3.6/site-packages/torch/nn/modules/conv.py in forward(self, input)
    252     def forward(self, input):
    253         return F.conv2d(input, self.weight, self.bias, self.stride,
--> 254                         self.padding, self.dilation, self.groups)
    255 
    256 

/home/quoniammm/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py in conv2d(input, weight, bias, stride, padding, dilation, groups)
     50     f = ConvNd(_pair(stride), _pair(padding), _pair(dilation), False,
     51                _pair(0), groups, torch.backends.cudnn.benchmark, torch.backends.cudnn.enabled)
---> 52     return f(input, weight, bias)
     53 
     54 

RuntimeError: expected Long tensor (got Float tensor)

I have a question about types.

When I wrap images in Variable(), why does its type change?

I can't understand it. Can someone tell me why?

Maybe one of the layers in your CNN does the conversion. Can you include the code for the CNN?
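
Also note that Variable() by itself shouldn't change the underlying data type; it only wraps the tensor for autograd, and the original tensor is still there under .data. A minimal sketch, assuming the pre-0.4 Variable API in your snippet:

import torch
from torch.autograd import Variable

t = torch.LongTensor([1, 2, 3])
v = Variable(t)
print(type(t))       # <class 'torch.LongTensor'>
print(type(v))       # <class 'torch.autograd.variable.Variable'>
print(type(v.data))  # still <class 'torch.LongTensor'>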

In the general case you can convert like so:

if use_cuda:
    lgr.info("Using the GPU")
    Y = Variable(torch.from_numpy(y_data_np).type(torch.LongTensor).cuda())
else:
    lgr.info("Using the CPU")
    Y = Variable(torch.squeeze(torch.from_numpy(y_data_np).type(torch.LongTensor)))

Also, BCELoss requires floats in Y (i.e. the targets), so maybe that is the case in your cost function too.
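
For context, here is a rough sketch of which dtypes the usual pieces expect (made-up tensors, not your actual data): conv/linear layers want float inputs, CrossEntropyLoss/NLLLoss want LongTensor class indices as targets, and BCELoss wants float targets.

import torch
import torch.nn as nn
from torch.autograd import Variable

images = Variable(torch.randn(4, 1, 28, 28))                 # layer inputs: FloatTensor
class_targets = Variable(torch.LongTensor([3, 1, 0, 7]))     # CrossEntropyLoss targets: LongTensor
probabilities = Variable(torch.rand(4))                      # BCELoss inputs: floats in [0, 1]
binary_targets = Variable(torch.FloatTensor([0, 1, 1, 0]))   # BCELoss targets: FloatTensor

logits = nn.Linear(28 * 28, 10)(images.view(4, -1))          # float in, float out
print(nn.CrossEntropyLoss()(logits, class_targets))
print(nn.BCELoss()(probabilities, binary_targets))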

The problem is solved, but I am still a little confused. The way I used it before was:

# Convert to PyTorch tensors
train_data = torch.from_numpy(train_data)
train_label = torch.from_numpy(train_label)
val_data = torch.from_numpy(valid_data)
val_label = torch.from_numpy(valid_label)

After reading your reply, I changed it to this:

# Convert to PyTorch tensors
train_data = torch.from_numpy(train_data).type(torch.FloatTensor)
train_label = torch.from_numpy(train_label).type(torch.LongTensor)
val_data = torch.from_numpy(valid_data).type(torch.FloatTensor)
val_label = torch.from_numpy(valid_label).type(torch.LongTensor)

The problem is solved, but the error message was RuntimeError: expected Long tensor (got Float tensor). Shouldn't it have been RuntimeError: expected Float tensor (got Long tensor)?

This is so weird that I can't understand it. What is the reason for it?

Please upload a full example to Git so that I can run it locally and understand what the problem is.

notebook

The data I use is the Kaggle MNIST dataset.

I fixed your issue; see the new notebook here:

You needed:

train_data = np.array(train_data, dtype=np.float32)  
valid_data = np.array(valid_data, dtype=np.float32)
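
torch.from_numpy keeps the numpy dtype, so (if I read your notebook right) the int64 arrays coming out of the CSV were turning into LongTensors, while the conv layers need float inputs. Casting the arrays to float32 first gives FloatTensors. A quick check with a made-up row:

import numpy as np
import torch

row = np.arange(784, dtype=np.int64).reshape(1, 784)        # stand-in for one CSV row
print(torch.from_numpy(row).type())                          # torch.LongTensor
print(torch.from_numpy(row.astype(np.float32)).type())       # torch.FloatTensor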

You now have a new error, but that would be easy to fix.
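
About the wording that confused you: as far as I can tell, conv2d checks its other tensor arguments against the type of the input, so a Long input makes it "expect Long" and complain about the Float weights rather than the other way around. A minimal repro sketch (the exact message differs between PyTorch versions):

import torch
import torch.nn as nn
from torch.autograd import Variable

conv = nn.Conv2d(1, 6, kernel_size=5)
long_images = Variable(torch.LongTensor(2, 1, 28, 28).zero_())

try:
    conv(long_images)                  # Long input vs Float weights -> RuntimeError
except RuntimeError as e:
    print(e)

float_images = long_images.float()     # casting the input to float fixes it
print(conv(float_images).size())       # torch.Size([2, 6, 24, 24])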

Best,


Thank you very much. The problem is solved.

My pleasure, let me know if you need anything else.
