I’ve looked through many resources online and I can’t seem to find a solution.

I have images that are 800x640, and I will be training and evaluating at this shape.

When I try to convert my data to a torch.Tensor, I get the following error:

`X = torch.Tensor([i[0] for i in data])`

`ValueError: expected sequence of length 800 at dim 1 (got 640)`
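To check whether mismatched shapes are the problem, this is a quick way to count the distinct image shapes in the list (a sketch with hypothetical stand-in data; `torch.Tensor` on a list of arrays only works when every array has exactly the same shape):

```python
from collections import Counter

import numpy as np

# Hypothetical stand-in for the real data: (image, label) pairs where the
# images do not all share the same orientation.
data = [(np.zeros((800, 640)), 0), (np.zeros((640, 800)), 1)]

# Count the distinct shapes among the images; more than one distinct shape
# means the list cannot be converted to a single tensor as-is.
shapes = Counter(i[0].shape for i in data)
print(shapes)  # Counter({(800, 640): 1, (640, 800): 1})
```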

I have tried padding my images to a square (800x800), filling the extra area with black, but I still get the same error. Here is my current code with the padding method. I would prefer to just use 800x640 if possible.

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = nn.Conv2d(1, 64, 10)
        self.conv2 = nn.Conv2d(64, 128, 10)
        self.conv3 = nn.Conv2d(128, 256, 10)
        # Dummy forward pass to find the flattened size after the convs
        x = torch.randn(800, 800).view(-1, 1, 800, 800)
        x = F.max_pool2d(F.relu(self.conv1(x)), (5, 5))
        x = F.max_pool2d(F.relu(self.conv2(x)), (5, 5))
        x = F.max_pool2d(F.relu(self.conv3(x)), (5, 5))
        # x.shape is [1, 256, 4, 4] here, i.e. 4096 when flattened
        self.fc1 = nn.Linear(4096, 512)
        self.fc2 = nn.Linear(512, 2)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), (5, 5))
        x = F.max_pool2d(F.relu(self.conv2(x)), (5, 5))
        x = F.max_pool2d(F.relu(self.conv3(x)), (5, 5))
        x = x.view(-1, 4096)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return F.softmax(x, dim=1)
```
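The padding itself looks roughly like this (a sketch, assuming 2-D grayscale arrays; `pad_to_square` is a hypothetical helper name, and the zeros go on the bottom/right):

```python
import numpy as np

def pad_to_square(img, size=800):
    """Pad a 2-D array with zeros (black) on the bottom/right up to size x size."""
    pad_h = size - img.shape[0]
    pad_w = size - img.shape[1]
    return np.pad(img, ((0, pad_h), (0, pad_w)), mode='constant', constant_values=0)

img = np.ones((800, 640))
padded = pad_to_square(img)
print(padded.shape)  # (800, 800)
```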

```
import os
import numpy as np
import torch

data = np.load(f'{os.path.dirname(__file__)}/data.npy', allow_pickle=True)
X = torch.Tensor([i[0] for i in data])  # <-- error here
```
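For reference, this is the kind of conversion I would expect to work once all the shapes match (a sketch with dummy arrays standing in for my data; `np.stack` builds one contiguous batch array, and `unsqueeze(1)` adds the channel dimension the convs expect):

```python
import numpy as np
import torch

# Dummy stand-in: 4 grayscale images, all 800x640.
images = [np.random.rand(800, 640).astype(np.float32) for _ in range(4)]

batch = np.stack(images)        # shape (4, 800, 640)
X = torch.from_numpy(batch)     # shape (4, 800, 640)
X = X.unsqueeze(1)              # add channel dim -> (4, 1, 800, 640)
print(X.shape)  # torch.Size([4, 1, 800, 640])
```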