RuntimeError: Expected object of scalar type long int but got scalar type float for sequence element 1

Hello all, I am new to PyTorch and machine learning, and recently I tried to run https://github.com/pclucas14/pixel-cnn-pp/blob/master/main.py with my own dataset.
My dataset class is as follows:

    def __init__(self, datapath, train=False, transform=None):
        self.data = []
        # map each level character to an integer class index
        self.map_dic = {"-": 0, "X": 1, "S": 2, "?": 3, "Q": 4, "E": 5, "<": 6, ">": 7, "[": 8, "]": 9, "o": 10, "B": 11, "b": 12}
        for file_name in os.listdir(datapath):
            with open(os.path.join(datapath, file_name), 'r') as f:
                res = np.array(list(map(lambda l: [self.map_dic.get(c, c) for c in l.strip()], f.readlines())))
            self.data.append(res)
        self.transform = transform

    def __getitem__(self, index):
        img = self.data[index]

        img = torch.from_numpy(img)
        img = img.unsqueeze(0)  # add a channel dimension: (1, H, W)

        # previous image-based pipeline:
        # img = Image.open('dataset_img/{}.png'.format(index)).convert('L')
        # if self.transform is not None:
        #     img = self.transform(img)

        return img, 0
    
    def __len__(self):
        return len(self.data)
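For reference, main.py feeds the dataset to the model roughly like this (a minimal sketch; the class name LevelDataset and the path are placeholders, not the actual names in main.py):

    from torch.utils.data import DataLoader

    train_set = LevelDataset(datapath='levels/', train=True)  # hypothetical class name and path
    train_loader = DataLoader(train_set, batch_size=1, shuffle=True)

    for input, _ in train_loader:
        output = model(input)  # this is the call that raises the error below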

The code worked fine originally, when I loaded the image from a file (using the commented-out code above) and transformed it with the following:

    transform = transforms.Compose([
        transforms.Resize(size=(16, 224)),
        transforms.ToTensor()
    ])

The image file here is created by the matplotlib.pyplot.imsave('name1.png', data, cmap='gray') function from the numpy array, which looks like this:

    [[0 0 0 … 0 0 0]
     [0 0 0 … 0 0 0]
     [0 0 0 … 0 0 0]

     [1 1 1 … 1 1 1]
     [1 1 1 … 1 1 1]
     [1 1 1 … 1 1 1]]
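For reference, a minimal sketch of that original array-to-image roundtrip (the toy array shape here is just for illustration):

    import numpy as np
    import matplotlib.pyplot as plt
    from PIL import Image

    data = np.zeros((16, 184), dtype=np.int64)  # toy array; real levels vary in width
    data[8:, :] = 1
    plt.imsave('name1.png', data, cmap='gray')  # write the array as a grayscale PNG
    img = Image.open('name1.png').convert('L')  # read it back as an 8-bit grayscale image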
Then I figured it should not be necessary to convert the numpy array to an image and then to a tensor; instead, I could convert directly from the numpy array to a tensor. But after switching to the dataset above, it constantly gives this error:

  File "maintxt1.py", line 241, in <module>   # line 122 in the original code
    output = model(input)
  File "/home/tangyeping/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/tangyeping/pixel-cnn-pp/model.py", line 119, in forward
    x = x if sample else torch.cat((x, self.init_padding), 1)
RuntimeError: Expected object of scalar type long int but got scalar type float for sequence element 1.

I have tried things like img = torch.from_numpy(img).long() and model.double(), but this error keeps appearing. I'd appreciate any suggestions.

First, you have not applied the transformation you defined; apply it in __getitem__.

Second, if you don't want to apply the transformation, change the tensor type using torch.from_numpy(img).float().

Show the output if you still get any errors.
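A minimal sketch of the second option, casting inside the __getitem__ of your dataset class above:

    def __getitem__(self, index):
        # model weights are float32, so cast the long tensor to float
        img = torch.from_numpy(self.data[index]).float()
        img = img.unsqueeze(0)  # (1, H, W)
        return img, 0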

Thank you a lot! I did not use a transform because now I am dealing with arrays and tensors instead of images. After using torch.from_numpy(img).float(), that error is gone. However, another error comes up:

loss : 6.1917, time : 0.4663
Traceback (most recent call last):
  File "maintxt1.py", line 240, in <module>
    output = model(input)
  File "/home/tangyeping/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/tangyeping/pixel-cnn-pp/model.py", line 119, in forward
    x = x if sample else torch.cat((x, self.init_padding), 1)
RuntimeError: Sizes of tensors must match except in dimension 3. Got 184 and 148

I have met this error before when using image objects, and after applying transforms.Resize(size=(16,224)) the error went away. Is there anything similar I can do with a tensor or numpy array to fix it? Or is there another source of error?

Okie.

Now, since you have not used ToTensor(), you have to make sure the data is shaped as (batch_size, channels, height, width) before feeding it to the model, e.g.:

    input = input.permute(0, 3, 1, 2)  # permute takes dimension indices, not sizes

Thank you for the advice. I thought what permute does is switch the dimensions of a tensor. I tried printing the size of the input and got:

starting training
torch.Size([1, 1, 16, 184])
loss : 6.1917, time : 0.4552
torch.Size([1, 1, 16, 148])
Traceback (most recent call last):
  File "maintxt1.py", line 241, in <module>
    output = model(input)
  File "/home/tangyeping/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/tangyeping/pixel-cnn-pp/model.py", line 119, in forward
    x = x if sample else torch.cat((x, self.init_padding), 1)
RuntimeError: Sizes of tensors must match except in dimension 3. Got 184 and 148

I am not sure how I can use permute to reshape the data the way transforms.Resize() does.

The two inputs have different widths: the first is torch.Size([1, 1, 16, 184]) and the second is torch.Size([1, 1, 16, 148]), so the sizes do not match.
The issue is with the size; please check the forward function also.
Something is happening here:

x = x if sample else torch.cat((x, self.init_padding), 1)

It looks like self.init_padding is created from the first input's size and then reused, so torch.cat fails once a later input has a different width. Print the sizes in the forward function; it will help to debug.
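If you want fixed-size inputs without going through PIL, one option is to pad (or crop) every array to a common width in the dataset. A minimal sketch, assuming a target width of 224 like your earlier Resize (np.pad and the pad value 0 are my choices, not something from the original code):

    TARGET_W = 224  # the width transforms.Resize(size=(16, 224)) used to produce

    def __getitem__(self, index):
        arr = self.data[index]
        pad = TARGET_W - arr.shape[1]
        if pad > 0:
            # pad columns on the right with 0 (the "-" background class)
            arr = np.pad(arr, ((0, 0), (0, pad)), constant_values=0)
        else:
            arr = arr[:, :TARGET_W]  # crop levels wider than the target
        img = torch.from_numpy(arr).float().unsqueeze(0)  # (1, H, W) float tensor
        return img, 0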