ValueError: expected sequence of length x at dim 1 (got y)

I’ve looked online for many resources and I can’t seem to find a solution.

So I have images that are 800x640, and I will be training and evaluating at this shape.
When I try to convert my data to a torch.Tensor, I get the following error:
X = torch.Tensor([i[0] for i in data]) | ValueError: expected sequence of length 800 at dim 1 (got 640)

I have tried padding my images to a square (800x800), filling the extra area with black, but I still get the same error. Here is my current code with the padding method. I would like to be able to just use 800x640 if possible.

import os

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()

        self.conv1 = nn.Conv2d(1, 64, 10)
        self.conv2 = nn.Conv2d(64, 128, 10)
        self.conv3 = nn.Conv2d(128, 256, 10)

        # run a dummy input through the conv stack to find the flattened size for fc1
        x = torch.randn(800, 800).view(-1, 1, 800, 800)

        x = F.max_pool2d(F.relu(self.conv1(x)), (5, 5))
        x = F.max_pool2d(F.relu(self.conv2(x)), (5, 5))
        x = F.max_pool2d(F.relu(self.conv3(x)), (5, 5))

        # 4096 is the flattened size of [1, 256, 4, 4] from the previous output
        # print(x.shape)
        self.fc1 = nn.Linear(4096, 512)
        self.fc2 = nn.Linear(512, 2)

    def forward(self, x):
        x = x.view(-1, 4096)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return F.softmax(x, dim=1)

data = np.load(f'{os.path.dirname(__file__)}/data.npy', allow_pickle=True)
X = torch.Tensor([i[0] for i in data])  # <-- error here

Since you’ve already loaded the data as a numpy array, you should be able to use:

X = torch.from_numpy(data)

Note that this method shares the underlying storage: if you change X in-place, the changes will also be applied to data.
If you don’t want this behavior, use torch.from_numpy(data).clone(), which creates a copy.
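A minimal sketch of the sharing behavior:

```python
import numpy as np
import torch

a = np.zeros(3, dtype=np.float32)
t = torch.from_numpy(a)   # shares memory with `a`
t[0] = 1.0                # in-place change is visible in the numpy array
print(a[0])               # 1.0

c = torch.from_numpy(a).clone()  # independent copy
c[1] = 5.0
print(a[1])                      # still 0.0
```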

Using the method above yielded:
X = torch.from_numpy(data).clone() | TypeError: can't convert np.ndarray of type numpy.object_. The only supported types are: float64, float32, float16, int64, int32, int16, int8, uint8, and bool.

    X = torch.from_numpy(data).clone()  # <-- error here
    X = X / 255.0  # scale pixel values to [0, 1]
    y = torch.Tensor([i[1] for i in data])
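The `numpy.object_` error usually means the array is ragged: the per-image arrays don’t all have the same shape, so numpy stored them as an array of Python objects rather than a numeric array. A quick way to check (a sketch, assuming each `data[i][0]` holds one image):

```python
import numpy as np

# simulate a ragged dataset: one image has a different shape
imgs = [np.zeros((800, 640)), np.zeros((640, 640))]
data = np.array([(im, 0) for im in imgs], dtype=object)

shapes = {np.asarray(i[0]).shape for i in data}
print(shapes)  # more than one shape -> ragged, hence dtype object_
```

Once every image has the same shape, `np.stack([i[0] for i in data])` produces a single float array that torch.from_numpy accepts.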

edit: I have also changed my model back to taking in a picture of width 640 and height 800, since it seems you can train on any dimensions you want as long as you evaluate at the same dimensions.

x = torch.randn(800, 640).view(-1, 1, 800, 640)
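Note that changing the input size also changes the flattened size feeding fc1. A quick sanity check of the conv/pool arithmetic (plain Python, assuming the same three 10x10 valid convolutions each followed by 5x5 max-pooling):

```python
def out_size(h, w, blocks=3, k=10, pool=5):
    """Spatial size after `blocks` of (valid conv with kernel k, then k=pool max-pool)."""
    for _ in range(blocks):
        h, w = (h - k + 1) // pool, (w - k + 1) // pool
    return h, w

print(out_size(800, 800))  # (4, 4) -> 256 * 4 * 4 = 4096
print(out_size(800, 640))  # (4, 2) -> 256 * 4 * 2 = 2048
```

So with an 800x640 input, fc1 would need in_features=2048 (and x.view(-1, 2048)) rather than the 4096 used for the 800x800 padded version.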

So I found out some of my data was 640x640, and I have padded those images to 640x800. Now that all my data is the same shape I’ve retried:
X = torch.Tensor([i[0] for i in data])
which now raises:
TypeError: only size-1 arrays can be converted to Python scalars

I am able to make a list of tensors of the data with:

    X = []
    for i in data:
        t = torch.Tensor(i[0])
        X.append(t)

However, I think the model needs a single tensor as input, so I have to convert the list of tensors into one stacked tensor?
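Yes — if every tensor in the list has the same shape, torch.stack combines them into one batch tensor. A sketch, assuming 800x640 images (the four zero tensors stand in for the real per-image data):

```python
import torch

X_list = [torch.zeros(800, 640) for _ in range(4)]  # stand-in for the per-image tensors
X = torch.stack(X_list)  # shape: [4, 800, 640]
X = X.unsqueeze(1)       # add a channel dimension -> [4, 1, 800, 640]
print(X.shape)
```

The unsqueeze adds the channel dimension that nn.Conv2d expects (batch, channels, height, width).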