Thank you very much for your reply, ptrblck.
This reply is both for documentation purposes as well as an invitation to help.
I flattened the tensors using
features = torch.flatten(features, start_dim=0, end_dim=1)
changing the shape from torch.Size([3, 8760, 30]) to torch.Size([35040, 30]), which looks better.
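For reference, a minimal sketch of what this flatten does (the shapes here are illustrative, not my actual data):

```python
import torch

# torch.flatten with start_dim=0, end_dim=1 merges the first two dims:
# (a, b, c) -> (a * b, c), leaving the feature dimension untouched
x = torch.randn(3, 5, 7)
flat = torch.flatten(x, start_dim=0, end_dim=1)
print(flat.shape)  # torch.Size([15, 7])
```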
Following your suggestions, I changed
model.forward(feature)
to
model.forward(feature.float())
https://pytorch.org/docs/stable/generated/torch.set_default_dtype.html explains that the default floating point dtype is initially torch.float32.
Adding
torch.set_default_dtype(torch.float64)
after
super(NeuralNetwork, self).__init__()
as you suggested should change the dtype to float64, but I get the error
undefined variable from import float64
despite adding "import torch" before the class definition.
Since this is just a matter of precision and I just want a running model for starters, I am fine with leaving it at float32.
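In case it helps anyone reading later, this is roughly how the float64 default is supposed to work (a sketch, not my actual model code; the bare name float64 is undefined without the torch. prefix):

```python
import torch

# Set the default floating point dtype before creating any layers;
# note it must be written torch.float64, not a bare float64.
torch.set_default_dtype(torch.float64)

# Layers created after the call use the new default dtype
layer = torch.nn.Linear(30, 1024)
print(layer.weight.dtype)  # torch.float64
```

Note that torch.set_default_dtype is a global setting, so calling it inside __init__ affects every floating point tensor created afterwards, not just this model.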
I also changed
output = F.relu(self.l3(output))
return output
to
return self.l3(output)
so the last layer no longer applies a ReLU.
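For context, the model now ends without an activation on the output layer (a minimal sketch; the layer names and sizes are my assumptions, not the actual model):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralNetwork(nn.Module):
    def __init__(self, num_features: int = 30):
        super().__init__()
        self.l0 = nn.Linear(num_features, 1024)
        self.l3 = nn.Linear(1024, 1)

    def forward(self, x):
        x = F.relu(self.l0(x))
        # no ReLU on the last layer, so the output is unbounded
        return self.l3(x)

model = NeuralNetwork()
out = model(torch.randn(10, 30))
print(out.shape)  # torch.Size([10, 1])
```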
Executing
model.forward(feature.float())
causes a
RuntimeError: mat1 and mat2 shapes cannot be multiplied...
From your response in this thread, the following line should be wrong, since the input is numberOfFeatures * batch_size:
self.l0 = nn.Linear(numberOfFeatures, 1024)
but changing this line to
self.l0 = nn.Linear(numberOfFeatures * batch_size, 1024)
still causes the following:
RuntimeError: mat1 and mat2 shapes cannot be multiplied (10x30 and 300x1024)
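If I read the error format correctly, mat1 is the input batch (10x30) and mat2 is the layer's weight (300x1024), which would mean nn.Linear's in_features has to match the last dimension of the input, not numberOfFeatures * batch_size. A sketch of what I think should line up (the sizes are my assumptions):

```python
import torch
import torch.nn as nn

batch_size, num_features = 10, 30
x = torch.randn(batch_size, num_features)

# in_features must equal the input's LAST dimension (30);
# the batch dimension (10) is handled automatically.
layer = nn.Linear(num_features, 1024)
print(layer(x).shape)  # torch.Size([10, 1024])

# Using in_features = num_features * batch_size (300) instead is what
# reproduces: mat1 and mat2 shapes cannot be multiplied (10x30 and 300x1024)
```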