I was working with a PyTorch model when I got this error. At first glance I thought the issue was raised because the model expects (batch × 3-D image size) for an image. But then I remembered that PyTorch modules (subclasses of nn.Module) can often handle both batched and non-batched data directly, like this:
import torch
import torch.nn as nn

layer = nn.Linear(4, 5)
non_batched_input = torch.rand(4)  # shape (4,), no batch dimension
layer(non_batched_input)
Output = tensor([-0.1435, 0.4662, -0.6626, -0.3902, 0.4086], grad_fn=<ViewBackward0>)
batched_input = torch.rand(5, 4)  # batch of 5 inputs of size 4
layer(batched_input)
Output = tensor([[-0.5876, 0.2451, -0.7482, -0.1869, 0.0120],
[-0.2541, 0.2187, -0.5797, -0.1493, 0.4184],
[-0.3964, 0.1388, -0.5467, -0.0359, 0.2836],
[-0.2042, 0.5370, -0.6630, -0.4109, 0.6399],
[-0.2992, 0.1950, -0.6337, -0.1641, 0.0830]],
grad_fn=<AddmmBackward0>)
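For context on why the Linear example above works either way: nn.Linear only operates on the last dimension, so any input of shape (*, in_features) is accepted and the leading dimensions are passed through unchanged. A minimal sketch (the shapes here are my own illustration, not from the original error):

```python
import torch
import torch.nn as nn

layer = nn.Linear(4, 5)

# nn.Linear multiplies against the last dimension only, so anything
# of shape (*, 4) is accepted; leading dimensions pass through.
print(layer(torch.rand(4)).shape)        # torch.Size([5])
print(layer(torch.rand(5, 4)).shape)     # torch.Size([5, 5])
print(layer(torch.rand(2, 3, 4)).shape)  # torch.Size([2, 3, 5])
```

Not every module is this flexible, which is why a full model can still reject an unbatched image.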
I also found the same issue on the forums, but that discussion was about batching in general. My question is: why is PyTorch, at this point, not able to recognize batched vs. non-batched data on its own?
This is not strictly an issue, since it has already been solved here by adding a single line of code:
valid_result = model(image_valid.unsqueeze(0))
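To illustrate why unsqueeze(0) fixes it: many models hard-code a leading batch dimension in their forward(), e.g. by flattening with view(x.size(0), -1). The model below is purely hypothetical (a stand-in for the one in the question, which isn't shown), but it reproduces the failure mode: an unbatched (C, H, W) image misaligns the flatten step, while unsqueeze(0) turns it into a (1, C, H, W) batch that goes through cleanly.

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Hypothetical model whose forward() assumes a batch dimension."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * 4 * 4, 10)

    def forward(self, x):
        x = self.conv(x)           # expects (N, 3, H, W)
        x = x.view(x.size(0), -1)  # treats dim 0 as the batch dim
        return self.fc(x)

model = TinyNet()
image_valid = torch.rand(3, 4, 4)  # single unbatched (C, H, W) image

# unsqueeze(0) prepends a batch dimension: (3, 4, 4) -> (1, 3, 4, 4)
valid_result = model(image_valid.unsqueeze(0))
print(valid_result.shape)  # torch.Size([1, 10])
```

Without the unsqueeze, view(x.size(0), -1) would flatten along the wrong dimension and the Linear layer would raise a shape-mismatch RuntimeError, which is the kind of error the question hit.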