Variable(torch.FloatTensor(train_x).type(dtypeFloat), requires_grad=False) doesn't support variable-size arrays

Hello,

I have the following NumPy array, called train_x:

type(train_x)
<class 'numpy.ndarray'>

where

train_x.shape
(5,)

and each element of train_x has a different shape:

train_x[0].shape = (16, 3)
train_x[4].shape = (18, 3)
train_x[3].shape = (27, 3)

When I wrap train_x in a Variable:
train_x = Variable(torch.FloatTensor(train_x).type(dtypeFloat), requires_grad=False)

I get 5 scalars:

train_x
Variable containing:
-0.0000
 0.0000
 1.6112
 0.0000
 0.0000
[torch.cuda.FloatTensor of size 5 (GPU 0)]

What is wrong with my code?

Here is a sample of my train_x:

train_x
array([array([[  0, 128,   0],
       [240, 128, 128],
       [ 30, 144, 255],
       [124, 252,   0],
       [  0, 191, 255],
       [  0,   0, 205],
       [233, 150, 122],
       [240, 128, 128],
       [240, 128, 128],
       [  0,   0, 139],
       [250, 128, 114],
       [  0, 128,   0],
       [220,  20,  60],
       [ 70, 130, 180],
       [ 30, 144, 255],
       [233, 150, 122],
       [ 34, 139,  34]]),
       array([[250, 128, 114],
       [250, 128, 114],
       [255, 160, 122],
       [  0, 128,   0],
       [100, 149, 237],
       [240, 128, 128],
       [ 65, 105, 225],
       [ 70, 130, 180],
       [ 70, 130, 180],
       [220,  20,  60],
       [205,  92,  92],
       [127, 255,   0],
       [220,  20,  60],
       [250, 128, 114],
       [  0, 128,   0],
       [240, 128, 128],
       [ 65, 105, 225],
       [ 70, 130, 180],
       [ 70, 130, 180],
       [220,  20,  60],
       [205,  92,  92],
       [173, 255,  47]]),
       array([[ 50, 205,  50],
       [220,  20,  60],
       [ 70, 130, 180],
       [154, 205,  50],
       [ 70, 130, 180],
       [  0,   0, 205],
       [250, 128, 114],
       [220,  20,  60],
       [255, 160, 122],
       [  0,   0, 128],
       [250, 128, 114],
       [127, 255,   0],
       [233, 150, 122],
       [240, 128, 128],
       [ 65, 105, 225],
       [ 70, 130, 180],
       [ 70, 130, 180],
       [220,  20,  60],
       [205,  92,  92],
       [176, 224, 230],
       [205,  92,  92],
       [  0, 128,   0]]),
       array([[ 50, 205,  50],
       [255, 160, 122],
       [  0,   0, 205],
       [124, 252,   0],
       [  0,   0, 139],
       [176, 224, 230],
       [250, 128, 114],
       [220,  20,  60],
       [240, 128, 128],
       [ 65, 105, 225],
       [ 70, 130, 180],
       [ 70, 130, 180],
       [220,  20,  60],
       [205,  92,  92],
       [240, 128, 128],
       [100, 149, 237],
       [205,  92,  92],
       [173, 255,  47],
       [233, 150, 122],
       [  0,   0, 128],
       [255, 160, 122],
       [ 34, 139,  34]]),
       array([[176, 224, 230],
       [ 50, 205,  50],
       [  0, 128,   0],
       [  0,   0, 255],
       [  0, 128,   0],
       [240, 128, 128],
       [ 65, 105, 225],
       [ 70, 130, 180],
       [ 70, 130, 180],
       [220,  20,  60],
       [205,  92,  92],
       [  0,   0, 205],
       [124, 252,   0],
       [240, 128, 128],
       [  0,   0, 139],
       [  0, 128,   0],
       [250, 128, 114],
       [ 70, 130, 180],
       [ 50, 205,  50],
       [233, 150, 122],
       [ 65, 105, 225],
       [250, 128, 114]])], dtype=object)

Here is my code:


        # draw the indices for the next mini-batch
        batch_idx = [indices.popleft() for i in range(batch_size)]
        train_x = train_data[batch_idx]
        train_y = train_labels[batch_idx]
        coord_train = coord_train_xy[batch_idx]
        adj_train = train_adjacency_matrix[batch_idx]

        # train_x is an object array of ragged (N_i, 3) arrays, so this
        # call cannot build a batched tensor and yields 5 scalars instead
        train_x = Variable(torch.FloatTensor(train_x).type(dtypeFloat), requires_grad=False)

        # labels: int64 -> LongTensor
        train_y = train_y.astype(np.int64)
        train_y = torch.LongTensor(train_y).type(dtypeLong)
        train_y = Variable(train_y, requires_grad=False)

        # coordinates as a LongTensor
        coord_train = torch.LongTensor(coord_train).type(dtypeLong)
        coord_train = Variable(coord_train, requires_grad=False)

        # adjacency matrices as a FloatTensor, then the forward pass
        adj_train = Variable(torch.FloatTensor(adj_train).type(dtypeFloat), requires_grad=False)
        y = net.forward(train_x, dropout_value, L_train, lmax_train, coord_train, adj_train)

Hi,
A tensor must have a fixed size in each dimension, so you cannot have elements of different sizes along a single dimension. Since your train_x is an object array holding arrays of different shapes, the torch.FloatTensor call cannot build a batched tensor from it, which is why you end up with 5 meaningless scalars. If you need a single rectangular tensor, you can zero-pad every sample to a common length, as sketched below.
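
Here is a minimal sketch of that padding idea, not something required by your setup; pad_ragged_batch is a made-up helper name, and it assumes each sample is an integer (N_i, 3) array as in your dump:

    import numpy as np
    import torch

    def pad_ragged_batch(arrays):
        # zero-pad each (N_i, 3) sample along its first axis so the
        # ragged samples stack into one rectangular (batch, N_max, 3) tensor
        max_len = max(a.shape[0] for a in arrays)
        batch = np.zeros((len(arrays), max_len, arrays[0].shape[1]), dtype=np.float32)
        for i, a in enumerate(arrays):
            batch[i, :a.shape[0]] = a
        return torch.from_numpy(batch)

    padded = pad_ragged_batch(list(train_x))  # e.g. (5, 27, 3) for your shapes

Note that padding changes the data, so your network has to be able to ignore the padded rows (for example via masking).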
Also, to convert a NumPy array to a torch Tensor, you should use torch.from_numpy to avoid any issues.
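
For instance, since your samples are ragged, a minimal sketch is to convert them one by one into a Python list of Variables (assuming the same Variable setup as in your code; torch.from_numpy keeps the integer dtype, hence the .float() call):

    import torch
    from torch.autograd import Variable

    # one Variable per ragged sample instead of a single batched tensor
    train_x_vars = [Variable(torch.from_numpy(a).float(), requires_grad=False)
                    for a in train_x]

You would then feed these to the network sample by sample, or pad them as above, rather than as one tensor.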