I'm new to PyTorch and neural networks and am having some trouble figuring out tensor shaping.
I have a task to make a neural network that can distinguish two intertwined rectangular spirals.
Running x.shape on the provided data gives [2592100, 2], which I am passing to a linear layer, so I'm not sure how to shape my tensors around this. I have tried x = x.view(x.size(0), -1), but this doesn't seem to remedy the issue.
Your model works fine for the mentioned input shape, if layer=1 is used:
model = Network(layer=1, hid=16)
x = torch.randn(2592100, 2)
out = model(x)
print(out.shape)
> torch.Size([2592100, 2])
With layer=0, the model instead expects a single input feature:
model = Network(layer=0, hid=16)
x = torch.randn(2592100, 1)
out = model(x)
print(out.shape)
> torch.Size([2592100, 2])
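Your Network definition isn't posted here, but a minimal sketch consistent with the shapes above could look like this (the exact architecture, activations, and the way layer selects a branch are assumptions on my side):

import torch
import torch.nn as nn

class Network(nn.Module):
    def __init__(self, layer, hid):
        super().__init__()
        self.layer = layer
        # self.short expects inputs of shape [batch_size, 2]
        self.short = nn.Sequential(
            nn.Linear(2, hid),
            nn.Tanh(),
            nn.Linear(hid, 2),
        )
        # self.long expects inputs of shape [batch_size, 1]
        self.long = nn.Sequential(
            nn.Linear(1, hid),
            nn.Tanh(),
            nn.Linear(hid, 2),
        )

    def forward(self, x):
        # layer=1 uses the 2-feature branch, otherwise the 1-feature branch
        return self.short(x) if self.layer == 1 else self.long(x)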
The self.long block expects a single input feature, so an input of shape [batch_size, 1] would also work.
Since your input has only two dimensions, x.view(x.size(0), -1) won’t change the shape and you could remove it.
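You can verify this directly; the view call is a no-op on a 2D input:

x = torch.randn(2592100, 2)
print(x.view(x.size(0), -1).shape)
> torch.Size([2592100, 2])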
No, 2592100 is the batch size; the size of dim1 defines the number of input features in your example.
As shown in the previous code snippets, your model works fine for an input of shape [batch_size, 2] if the self.short module is used and of shape [batch_size, 1] if the self.long module is used.
It seems you’ve now changed the in_features of the linear layer in self.long to 2592100, which is also wrong.
If your input has a shape of [batch_size, 2], use nn.Linear(2, hid) for the first linear layer in self.short and self.long.
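For example, with hid=16 as in the earlier snippets, a first layer sized this way accepts the full input directly:

hid = 16
fc1 = nn.Linear(2, hid)      # in_features matches dim1 of the input, not the batch size
x = torch.randn(2592100, 2)  # 2592100 samples, 2 features each
print(fc1(x).shape)
> torch.Size([2592100, 16])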